* [v1 00/19] net/mlx5: Add HW steering low level support
@ 2022-09-22 19:03 Alex Vesker
  2022-09-22 19:03 ` [v1 01/19] net/mlx5: split flow item translation Alex Vesker
                   ` (23 more replies)
  0 siblings, 24 replies; 134+ messages in thread
From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw)
  To: valex, viacheslavo, erezsh, thomas, suanmingm; +Cc: dev, orika

Mellanox ConnectX devices support packet matching, packet modification
and redirection. These functionalities are also referred to as flow
steering. To configure a steering rule, the rule is written to
device-owned memory; this memory is accessed and cached by the device
when processing a packet.

The highlight of this patchset is support for HW Steering (HWS), the new
steering technology available in recent ConnectX devices. HWS allows
configuring steering rules directly in the HW through dedicated HW
queues, with minimal CPU effort.

This patchset is the internal low-level implementation of HWS used by
the mlx5 PMD. The mlx5dr (direct rule) layer bridges between the PMD and
the HW by configuring the HW offloads based on the PMD logic. An
illustrative sketch of the intended mlx5dr call flow follows the
diffstat at the end of this letter.

This is an initial draft presenting the code to the community; it will
be reworked.

Alex Vesker (13):
  net/mlx5: Add additional glue functions for HWS
  net/mlx5: Remove stub HWS support
  net/mlx5/hws: Add HWS command layer
  net/mlx5/hws: Add HWS pool and buddy
  net/mlx5/hws: Add HWS send layer
  net/mlx5/hws: Add HWS definer layer
  net/mlx5/hws: Add HWS context object
  net/mlx5/hws: Add HWS table object
  net/mlx5/hws: Add HWS matcher object
  net/mlx5/hws: Add HWS rule object
  net/mlx5/hws: Add HWS action object
  net/mlx5/hws: Add HWS debug layer
  net/mlx5/hws: Enable HWS

Bing Zhao (2):
  common/mlx5: query set capability of registers
  net/mlx5: provide the available tag registers

Dariusz Sosnowski (1):
  net/mlx5: add port to metadata conversion

Suanming Mou (3):
  net/mlx5: split flow item translation
  net/mlx5: split flow item matcher and value translation
  net/mlx5: add hardware steering item translation function

 drivers/common/mlx5/linux/mlx5_glue.c        |  121 +-
 drivers/common/mlx5/linux/mlx5_glue.h        |   17 +
 drivers/common/mlx5/mlx5_devx_cmds.c         |   30 +
 drivers/common/mlx5/mlx5_devx_cmds.h         |    2 +
 drivers/common/mlx5/mlx5_prm.h               |  653 ++++-
 drivers/net/mlx5/hws/meson.build             |   18 +
 drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} |  210 +-
 drivers/net/mlx5/hws/mlx5dr_action.c         | 2217 +++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_action.h         |  251 ++
 drivers/net/mlx5/hws/mlx5dr_buddy.c          |  201 ++
 drivers/net/mlx5/hws/mlx5dr_buddy.h          |   18 +
 drivers/net/mlx5/hws/mlx5dr_cmd.c            |  957 +++++++
 drivers/net/mlx5/hws/mlx5dr_cmd.h            |  232 ++
 drivers/net/mlx5/hws/mlx5dr_context.c        |  222 ++
 drivers/net/mlx5/hws/mlx5dr_context.h        |   40 +
 drivers/net/mlx5/hws/mlx5dr_debug.c          |  459 ++++
 drivers/net/mlx5/hws/mlx5dr_debug.h          |   28 +
 drivers/net/mlx5/hws/mlx5dr_definer.c        | 1866 +++++++++++++
 drivers/net/mlx5/hws/mlx5dr_definer.h        |  582 ++++
 drivers/net/mlx5/hws/mlx5dr_internal.h       |   93 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c        |  920 +++++++
 drivers/net/mlx5/hws/mlx5dr_matcher.h        |   76 +
 drivers/net/mlx5/hws/mlx5dr_pat_arg.c        |  511 ++++
 drivers/net/mlx5/hws/mlx5dr_pat_arg.h        |   76 +
 drivers/net/mlx5/hws/mlx5dr_pool.c           |  672 +++++
 drivers/net/mlx5/hws/mlx5dr_pool.h           |  152 +
 drivers/net/mlx5/hws/mlx5dr_rule.c           |  528 ++++
 drivers/net/mlx5/hws/mlx5dr_rule.h           |   50 +
 drivers/net/mlx5/hws/mlx5dr_send.c           |  849 ++++++
 drivers/net/mlx5/hws/mlx5dr_send.h           |  273 ++
 drivers/net/mlx5/hws/mlx5dr_table.c          |  248 ++
 drivers/net/mlx5/hws/mlx5dr_table.h          |   44 +
 drivers/net/mlx5/linux/mlx5_os.c             |    7 +-
 drivers/net/mlx5/meson.build                 |    2 +-
 drivers/net/mlx5/mlx5.c                      |    3 +
 drivers/net/mlx5/mlx5.h                      |    3 +-
 drivers/net/mlx5/mlx5_defs.h                 |    2 +
 drivers/net/mlx5/mlx5_dr.c                   |  383 ---
 drivers/net/mlx5/mlx5_flow.c                 |   17 +
 drivers/net/mlx5/mlx5_flow.h                 |  128 +
 drivers/net/mlx5/mlx5_flow_dv.c              | 2599 +++++++++---------
 drivers/net/mlx5/mlx5_flow_hw.c              |  109 +-
 42 files changed, 14189 insertions(+), 1680 deletions(-)
 create mode 100644 drivers/net/mlx5/hws/meson.build
 rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (65%)
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c
 create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h
 delete mode 100644 drivers/net/mlx5/mlx5_dr.c

-- 
2.18.1

^ permalink raw reply	[flat|nested] 134+ messages in thread
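As a reading aid for the cover letter above, here is a minimal, purely
illustrative C sketch of the rule-insertion flow that the mlx5dr layer
implements: open a context with HW send queues, create a table, attach a
matcher, enqueue a rule on a queue and poll for its completion. The
hws_* helper names, prototypes and return conventions are hypothetical
stand-ins invented for this sketch only; the real entry points are
defined in drivers/net/mlx5/hws/mlx5dr.h added by this series. Error
cleanup is omitted for brevity.

#include <stdint.h>
#include <rte_flow.h>

/* Opaque HWS objects, mirroring the layers added by this series. */
struct mlx5dr_context;   /* device context, owns the HW send queues */
struct mlx5dr_table;     /* flow table created inside a context */
struct mlx5dr_matcher;   /* match template attached to a table */
struct mlx5dr_rule;      /* rule handle, storage provided by the caller */

/*
 * Hypothetical wrappers used only to show the call order; they do not
 * match the exact mlx5dr.h prototypes.
 */
struct mlx5dr_context *hws_context_open(void *ibv_ctx, uint16_t nb_queues);
struct mlx5dr_table *hws_table_create(struct mlx5dr_context *ctx, uint32_t level);
struct mlx5dr_matcher *hws_matcher_create(struct mlx5dr_table *tbl,
					  const struct rte_flow_item mask[]);
int hws_rule_enqueue(struct mlx5dr_matcher *mtr, uint16_t queue_id,
		     const struct rte_flow_item spec[],
		     struct mlx5dr_rule *rule);
int hws_queue_poll(struct mlx5dr_context *ctx, uint16_t queue_id);

/*
 * Insert one rule: the CPU only posts a descriptor on a HW queue and
 * polls for its completion; the device writes the rule into its own
 * memory and caches it while processing packets.
 */
int
hws_insert_rule_sketch(void *ibv_ctx,
		       const struct rte_flow_item mask[],
		       const struct rte_flow_item spec[],
		       struct mlx5dr_rule *rule)
{
	struct mlx5dr_context *ctx;
	struct mlx5dr_table *tbl;
	struct mlx5dr_matcher *mtr;

	ctx = hws_context_open(ibv_ctx, 1);	/* one HW send queue */
	if (ctx == NULL)
		return -1;
	tbl = hws_table_create(ctx, 0);		/* root-level table */
	if (tbl == NULL)
		return -1;
	mtr = hws_matcher_create(tbl, mask);	/* fields to match on */
	if (mtr == NULL)
		return -1;
	if (hws_rule_enqueue(mtr, 0, spec, rule) != 0)
		return -1;
	while (hws_queue_poll(ctx, 0) == 0)	/* 0: nothing completed yet */
		;
	return 0;
}

The point of the queue-based model is that rule insertion becomes an
asynchronous, batched operation, which is what lets HWS scale the
insertion rate with minimal CPU effort.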
* [v1 01/19] net/mlx5: split flow item translation
  2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker
@ 2022-09-22 19:03 ` Alex Vesker
  2022-09-22 19:03   ` [v1 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker
                      ` (22 subsequent siblings)
  23 siblings, 0 replies; 134+ messages in thread
From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw)
  To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika

From: Suanming Mou <suanmingm@nvidia.com>

In order to share the item translation code with the hardware steering
mode, this commit splits the flow item translation code into a dedicated
function.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++---------
 1 file changed, 979 insertions(+), 936 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 5a382b66a4..2f3f4b98b9 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13029,8 +13029,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
 }
 
 /**
- * Fill the flow with DV spec, lock free
- * (mutex should be acquired by caller).
+ * Translate the flow item to matcher.
  *
  * @param[in] dev
  *   Pointer to rte_eth_dev structure.
@@ -13040,8 +13039,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
  *   Pointer to the flow attributes.
  * @param[in] items
  *   Pointer to the list of items.
- * @param[in] actions
- *   Pointer to the list of actions.
+ * @param[in] matcher
+ *   Pointer to the flow matcher.
  * @param[out] error
  *   Pointer to the error structure.
  *
@@ -13049,1041 +13048,1086 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. - */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. 
*/ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14153,7 +14197,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
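For readers following the refactor above: the net effect of this patch is that flow_dv_translate() now walks the action list first, accumulating action_flags, and only then hands the whole item pattern to flow_dv_translate_items() in a single call. The following is a minimal standalone sketch of that two-pass ordering only, using simplified hypothetical types and names (sketch_flow, sketch_translate_items); it is not the real mlx5 code and omits matcher registration, RSS hash-field setup and error paths.

/*
 * Sketch (hypothetical, simplified types): actions are translated first
 * and folded into action_flags, then all pattern items are translated
 * by one helper call, mirroring the reworked flow_dv_translate().
 */
#include <stdint.h>
#include <stdio.h>

#define SKETCH_ACTION_ENCAP (1u << 0)
#define SKETCH_ACTION_COUNT (1u << 1)

struct sketch_flow {
	uint32_t action_flags;  /* accumulated like dev_flow->act_flags */
	int items_translated;   /* set by the item-translation helper   */
};

/* Stand-in for flow_dv_translate_items(): one pass over the pattern. */
static int
sketch_translate_items(struct sketch_flow *flow)
{
	flow->items_translated = 1;
	return 0;
}

static int
sketch_translate(struct sketch_flow *flow, const uint32_t *actions, int n)
{
	int i;

	/* First pass: actions only, as in the reworked switch () loop. */
	for (i = 0; i < n; i++)
		flow->action_flags |= actions[i];
	/* Second pass: items, delegated to a single helper. */
	return sketch_translate_items(flow);
}

int
main(void)
{
	const uint32_t actions[] = { SKETCH_ACTION_ENCAP, SKETCH_ACTION_COUNT };
	struct sketch_flow flow = { 0 };

	if (sketch_translate(&flow, actions, 2) == 0)
		printf("act_flags=0x%x items=%d\n",
		       flow.action_flags, flow.items_translated);
	return 0;
}

Keeping item translation in a single helper is what lets the next patch reuse the same translation path for both the matcher mask and the matcher value.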
* [v1 02/19] net/mlx5: split flow item matcher and value translation 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-09-22 19:03 ` [v1 01/19] net/mlx5: split flow item translation Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 03/19] net/mlx5: add hardware steering item translation function Alex Vesker ` (21 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2317 +++++++++++++++---------------- 2 files changed, 1188 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 0fa1735b1a..2ebb8496f2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1264,6 +1264,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. 
*/ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 2f3f4b98b9..cea1aa3137 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? (item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8323,70 +8342,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. 
*/ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8400,145 +8410,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. */ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8547,14 +8561,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8564,68 +8579,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8634,14 +8622,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8655,287 +8644,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8944,55 +8863,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9010,8 +8920,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9019,20 +8932,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9043,17 +8954,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9062,24 +8973,28 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { - const struct rte_flow_item_gre_opt *option_m = item->mask; - const struct rte_flow_item_gre_opt *option_v = item->spec; + const struct rte_flow_item_gre_opt *option_m; + const struct rte_flow_item_gre_opt *option_v; const struct rte_flow_item_gre *gre_m = gre_item->mask; const struct rte_flow_item_gre *gre_v = gre_item->spec; static const struct rte_flow_item_gre empty_gre = {0}; + struct rte_flow_item_gre_opt option_dm; struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; void *misc5_v; + memset(&option_dm, 0, sizeof(option_dm)); + MLX5_ITEM_UPDATE(item, key_type, option_v, option_m, &option_dm); /* * If only match key field, keep using misc for matching. * If need to match checksum or sequence, using misc5 and do @@ -9087,11 +9002,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9126,57 +9040,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9195,158 +9101,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9355,9 +9248,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9366,15 +9262,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9387,50 +9286,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9439,17 +9338,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9460,8 +9358,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9471,10 +9371,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9551,38 +9449,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9596,17 +9494,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9616,9 +9518,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9627,10 +9526,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9639,93 +9536,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] reg_type @@ -9736,12 +9618,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9749,11 +9628,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9762,40 +9639,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9804,34 +9672,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9847,7 +9752,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9856,65 +9761,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9922,17 +9828,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9941,6 +9847,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9950,7 +9858,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9958,50 +9866,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10010,31 +9918,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10042,6 +9953,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10055,20 +9973,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10078,8 +9993,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10091,21 +10004,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10113,6 +10030,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10125,119 +10050,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10246,64 +10185,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10311,21 +10237,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10335,52 +10259,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10389,29 +10301,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10419,21 +10329,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10445,16 +10356,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10463,21 +10369,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10542,7 +10446,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11328,42 +11232,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) - return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13029,7 +12939,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13039,7 +13240,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13048,287 +13249,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13338,13 +13293,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13354,48 +13318,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14124,7 +14122,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16690,27 +16688,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16742,9 +16736,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16757,10 +16748,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16769,7 +16760,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17305,7 +17296,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17325,7 +17316,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17344,7 +17335,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17364,7 +17355,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18846,8 +18837,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
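The split in this patch boils down to one translation routine driven by a key type: the SW steering path now fills the matcher and value buffers with two passes over the same item instead of writing both buffers in a single call. Below is a minimal sketch of that calling pattern, assuming the mlx5_flow_dv.c context where flow_dv_translate_items(), struct mlx5_dv_matcher_workspace and the MLX5_SET_MATCHER_SW_* key types from this patch are visible; the wrapper name is hypothetical.

/* Hypothetical helper illustrating the two-pass SW steering translation. */
static int
sws_translate_item_both_keys(struct rte_eth_dev *dev,
			     const struct rte_flow_item *item,
			     struct mlx5_dv_matcher_workspace *wks_m,
			     struct mlx5_dv_matcher_workspace *wks_v,
			     void *match_mask, void *match_value,
			     struct rte_flow_error *error)
{
	int ret;

	/* First pass fills the matcher (mask) buffer. */
	ret = flow_dv_translate_items(dev, item, wks_m, match_mask,
				      MLX5_SET_MATCHER_SW_M, error);
	if (ret)
		return ret;
	/* Second pass fills the value buffer for the same item. */
	return flow_dv_translate_items(dev, item, wks_v, match_value,
				       MLX5_SET_MATCHER_SW_V, error);
}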
* [v1 03/19] net/mlx5: add hardware steering item translation function 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-09-22 19:03 ` [v1 01/19] net/mlx5: split flow item translation Alex Vesker 2022-09-22 19:03 ` [v1 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 04/19] net/mlx5: add port to metadata conversion Alex Vesker ` (20 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering root table flows still work under FW steering mode, this commit provides shared item translation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 17 ++++++ drivers/net/mlx5/mlx5_flow_dv.c | 93 +++++++++++++++++++++++++++++++++ 2 files changed, 110 insertions(+) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2ebb8496f2..86a08074dc 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1006,6 +1006,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -2122,4 +2134,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index cea1aa3137..885b4c5588 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13229,6 +13229,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? *item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
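As a rough usage sketch, a caller inserting a rule into a root (FW steering) table could build the value-side key for an Rx table as below. flow_dv_translate_items_hws(), struct mlx5_flow_attr, MLX5_SET_MATCHER_HS_V and MLX5DR_TABLE_TYPE_NIC_RX come from this patch; the wrapper name, the caller-provided buffer and the error policy are assumptions.

/* Hypothetical caller; assumes the mlx5 PMD headers above are included. */
static int
hws_root_build_value_key(uint16_t port_id, uint32_t group,
			 const struct rte_flow_item items[],
			 void *value_buf, uint64_t *item_flags,
			 uint8_t *match_criteria,
			 struct rte_flow_error *error)
{
	struct mlx5_flow_attr attr = {
		.port_id = port_id,
		.group = group,
		.tbl_type = MLX5DR_TABLE_TYPE_NIC_RX,
	};

	/* Translate the item list into the FW/root table value key. */
	return flow_dv_translate_items_hws(items, &attr, value_buf,
					   MLX5_SET_MATCHER_HS_V,
					   item_flags, match_criteria,
					   error);
}

The mask-side key would presumably be produced the same way with MLX5_SET_MATCHER_HS_M.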
* [v1 04/19] net/mlx5: add port to metadata conversion 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-09-22 19:03 ` [v1 03/19] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 05/19] common/mlx5: query set capability of registers Alex Vesker ` (19 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 ++ drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 50 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 88 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 2a539eb085..7e316d9dce 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1540,6 +1540,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); if (priv->sh->config.dv_flow_en == 2) return eth_dev; /* Port representor shares the same max priority with pf port. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 5829b66b0b..abdf867ea8 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); #endif + flow_hw_clear_port_info(dev); if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index e4744b0a67..acf1467bf6 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 86a08074dc..2eb2b46060 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1320,6 +1320,56 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? 
port_info : NULL; +} + +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 12498794a5..fe809a83b9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2208,6 +2208,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
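A small sketch of how this conversion might be consumed when building a source-port match for an HWS rule: flow_hw_conv_port_id() and struct flow_hw_port_info are from this patch, while the helper name and its error convention are hypothetical.

/* Hypothetical helper: map an ethdev port to its REG_C tag/mask pair. */
static int
port_to_regc_match(uint16_t port_id, uint32_t *value, uint32_t *mask)
{
	const struct flow_hw_port_info *info = flow_hw_conv_port_id(port_id);

	if (info == NULL)
		return -1; /* No vport metadata registered for this port. */
	*value = info->regc_value;
	*mask = info->regc_mask;
	return 0;
}

flow_hw_set_port_info() has to run first (it is called from mlx5_dev_spawn() in this patch) so that the shared mlx5_flow_hw_port_infos[] array holds the port's vport meta tag and mask.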
* [v1 05/19] common/mlx5: query set capability of registers 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-09-22 19:03 ` [v1 04/19] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 06/19] net/mlx5: provide the available tag registers Alex Vesker ` (18 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, copy to a REG_C_x. The set capability are queried and saved for the future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 44 +++++++++++++++++++++++++--- 3 files changed, 72 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index fb33023138..ac6891145d 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1058,6 +1058,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1157,6 +1175,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index af6053a788..d69dad613e 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -260,6 +260,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 4346279c81..12eb7b3b7f 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1892,6 +1892,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1920,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1944,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2015,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2041,6 +2076,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
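A minimal sketch of consuming the new capability bit-field: struct mlx5_hca_attr and its set_reg_c member come from this patch, the counting helper itself is illustrative only.

/* Hypothetical helper: bit i of set_reg_c advertises REG_C_i. */
static unsigned int
count_settable_reg_c(const struct mlx5_hca_attr *attr)
{
	unsigned int i, n = 0;

	for (i = 0; i < 8; i++) {
		/*
		 * A bit stays set only if REG_C_i is settable on every
		 * domain queried above: NIC RX, NIC TX and, for the
		 * e-switch manager, the FDB.
		 */
		if (attr->set_reg_c & (1u << i))
			n++;
	}
	return n;
}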
* [v1 06/19] net/mlx5: provide the available tag registers 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-09-22 19:03 ` [v1 05/19] common/mlx5: query set capability of registers Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (17 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 5 ++- drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 123 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 7e316d9dce..6906914ba8 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1542,8 +1542,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, rte_rwlock_init(&priv->ind_tbls_lock); if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index abdf867ea8..556709c697 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1946,6 +1946,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) flow_hw_resource_release(dev); #endif flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 9300dc02ff..e855dc6ab5 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. 
*/ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index acf1467bf6..45109001ca 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2eb2b46060..cae1a64def 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1328,6 +1328,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1367,9 +1371,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) return NULL; } +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe809a83b9..78c741bb91 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2237,6 +2237,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
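For illustration, the lookup added above might be used roughly as follows when resolving TAG and META items. flow_hw_get_reg_id(), REG_NON and the rte_flow item types are real; the wrapper below is a hypothetical sketch and assumes the TAG index was already validated against MLX5_FLOW_HW_TAGS_MAX.

/* Hypothetical helper: resolve a TAG/META item to its REG_C_x. */
static int
item_to_reg_c(const struct rte_flow_item *item)
{
	if (item->type == RTE_FLOW_ITEM_TYPE_TAG) {
		const struct rte_flow_item_tag *spec = item->spec;

		/* The TAG index selects one of the probed REG_C_x registers. */
		return flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
					  spec != NULL ? spec->index : 0);
	}
	if (item->type == RTE_FLOW_ITEM_TYPE_META)
		return flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, 0);
	return REG_NON;
}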
* [v1 07/19] net/mlx5: Add additional glue functions for HWS 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-09-22 19:03 ` [v1 06/19] net/mlx5: provide the available tag registers Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 08/19] net/mlx5: Remove stub HWS support Alex Vesker ` (16 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 450dd6a06a..943d4bf833 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + enum mlx5dv_flow_table_type 
ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1449,6 +1548,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1507,7 +1607,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1516,8 +1618,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1526,6 +1632,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index c4903a6dce..ef7341a76a 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
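A rough sketch of the root-table path these callbacks enable, with the matcher attribute setup and most error details left out; the three *_root glue entry points are the ones added here, the wrapper itself is hypothetical and assumes the actions array was already built as mlx5dv_flow_action_attr entries.

/* Hypothetical wrapper around the new root (kernel) table glue calls. */
static void *
root_rule_create(struct ibv_context *ctx,
		 struct mlx5dv_flow_matcher_attr *matcher_attr,
		 void *match_value, size_t num_actions, void *actions,
		 void **matcher_out)
{
	void *matcher, *flow;

	matcher = mlx5_glue->dv_create_flow_matcher_root(ctx, matcher_attr);
	if (matcher == NULL)
		return NULL;
	flow = mlx5_glue->dv_create_flow_root(matcher, match_value,
					      num_actions, actions);
	if (flow == NULL) {
		mlx5_glue->dv_destroy_flow_matcher_root(matcher);
		return NULL;
	}
	*matcher_out = matcher; /* Caller destroys it after the flow. */
	return flow;
}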
* [v1 08/19] net/mlx5: Remove stub HWS support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-09-22 19:03 ` [v1 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker ` (15 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika This change breaks compilation, which is bad, but it will be fixed for the final submission. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/meson.build | 1 - drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_dr.c | 383 ----------------------------- drivers/net/mlx5/mlx5_dr.h | 456 ----------------------------------- 4 files changed, 841 deletions(-) delete mode 100644 drivers/net/mlx5/mlx5_dr.c delete mode 100644 drivers/net/mlx5/mlx5_dr.h diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..c7ddd4b65c 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,7 +14,6 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', 'mlx5_flow_hw.c', diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index e855dc6ab5..05a1bad0e6 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,6 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. - * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. 
- */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/mlx5_dr.h deleted file mode 100644 index d0b2c15652..0000000000 --- a/drivers/net/mlx5/mlx5_dr.h +++ /dev/null @@ -1,456 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. 
- */ - -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ - -#include <rte_flow.h> - -struct mlx5dr_context; -struct mlx5dr_table; -struct mlx5dr_matcher; -struct mlx5dr_rule; - -enum mlx5dr_table_type { - MLX5DR_TABLE_TYPE_NIC_RX, - MLX5DR_TABLE_TYPE_NIC_TX, - MLX5DR_TABLE_TYPE_FDB, - MLX5DR_TABLE_TYPE_MAX, -}; - -enum mlx5dr_matcher_resource_mode { - /* Allocate resources based on number of rules with minimal failure probability */ - MLX5DR_MATCHER_RESOURCE_MODE_RULE, - /* Allocate fixed size hash table based on given column and rows */ - MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, -}; - -enum mlx5dr_action_flags { - MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, - MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, - MLX5DR_ACTION_FLAG_ROOT_FDB = 1 << 2, - MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, - MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, - MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, -}; - -enum mlx5dr_action_reformat_type { - MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2, - MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2, - MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2, - MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, -}; - -enum mlx5dr_match_template_flags { - /* Allow relaxed matching by skipping derived dependent match fields. */ - MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, -}; - -enum mlx5dr_send_queue_actions { - /* Start executing all pending queued rules and write to HW */ - MLX5DR_SEND_QUEUE_ACTION_DRAIN = 1 << 0, -}; - -struct mlx5dr_context_attr { - uint16_t queues; - uint16_t queue_size; - size_t initial_log_ste_memory; - /* Optional PD used for allocating res ources */ - struct ibv_pd *pd; -}; - -struct mlx5dr_table_attr { - enum mlx5dr_table_type type; - uint32_t level; -}; - -struct mlx5dr_matcher_attr { - uint32_t priority; - enum mlx5dr_matcher_resource_mode mode; - union { - struct { - uint8_t sz_row_log; - uint8_t sz_col_log; - } table; - - struct { - uint8_t num_log; - } rule; - }; -}; - -struct mlx5dr_rule_attr { - uint16_t queue_id; - void *user_data; - uint32_t burst:1; -}; - -struct mlx5dr_devx_obj { - struct mlx5dv_devx_obj *obj; - uint32_t id; -}; - -struct mlx5dr_rule_action { - struct mlx5dr_action *action; - union { - struct { - uint32_t value; - } tag; - - struct { - uint32_t offset; - } counter; - - struct { - uint32_t offset; - uint8_t *data; - } modify_header; - - struct { - uint32_t offset; - uint8_t *data; - } reformat; - - struct { - rte_be32_t vlan_hdr; - } push_vlan; - }; -}; - -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; - -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; - -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; - }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ -}; - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr); - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. 
- */ -int mlx5dr_context_close(struct mlx5dr_context *ctx); - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr); - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_table_destroy(struct mlx5dr_table *tbl); - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. - * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags); - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table, - struct mlx5dr_match_template *mt[], - uint8_t num_of_mt, - struct mlx5dr_matcher_attr *attr); - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher); - -/* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. - * - * @return size in bytes of rule handle struct. - */ -size_t mlx5dr_rule_get_handle_size(void); - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, - uint8_t mt_idx, - const struct rte_flow_item items[], - struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, - struct mlx5dr_rule_attr *attr, - struct mlx5dr_rule *rule_handle); - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. 
- */ -int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, - struct mlx5dr_rule_attr *attr); - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, - struct mlx5dr_table *tbl, - uint32_t flags); - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, - enum mlx5dr_action_reformat_type reformat_type, - size_t data_sz, - void *inline_data, - uint32_t log_bulk_size, - uint32_t flags); - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] pattern_sz - * Byte size of the pattern array. 
- * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_action_destroy(struct mlx5dr_action *action); - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, - uint16_t queue_id, - struct rte_flow_op_result res[], - uint32_t res_nb); - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, - uint16_t queue_id, - uint32_t actions); - -/* Dump HWS info - * - * @param[in] ctx - * The context which to dump the info from. - * @param[in] f - * The file to write the dump to. - * @return zero on success non zero otherwise. - */ -int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); - -#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
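Although the weak stubs are removed here, the comments in the deleted mlx5_dr.h already describe the intended HWS programming model: open a context with a number of send queues, build tables and matchers from match templates, then enqueue rule operations and poll the queue for their completions. Below is a minimal usage sketch, assuming the same prototypes return later in this series under drivers/net/mlx5/hws/mlx5dr.h; the ibv context, match items and tag value are placeholders, and error handling plus teardown are trimmed for brevity.

#include <stdlib.h>
#include <rte_flow.h>
#include "mlx5dr.h"	/* renamed from mlx5_dr.h later in this series */

/* Insert a single "match ETH, set TAG" rule through one HWS queue. */
static int hws_example_insert_rule(void *ibv_ctx)
{
	struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
	struct mlx5dr_table_attr tbl_attr = {
		.type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1 };
	struct mlx5dr_matcher_attr matcher_attr = {
		.priority = 0,
		.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE,
		.rule.num_log = 10, /* sized for roughly 1K rules */
	};
	/* For the rule itself the item spec pointers would carry the values
	 * to match; a bare mask is enough for this sketch. */
	struct rte_flow_item items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &rte_flow_item_eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct mlx5dr_rule_attr rule_attr = { .queue_id = 0, .burst = 0 };
	struct mlx5dr_rule_action rule_actions[1];
	struct rte_flow_op_result res[4];
	struct mlx5dr_match_template *mt;
	struct mlx5dr_matcher *matcher;
	struct mlx5dr_context *ctx;
	struct mlx5dr_table *tbl;
	struct mlx5dr_action *tag;
	struct mlx5dr_rule *rule;
	int comp;

	ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
	tbl = mlx5dr_table_create(ctx, &tbl_attr);
	mt = mlx5dr_match_template_create(items,
					  MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
	matcher = mlx5dr_matcher_create(tbl, &mt, 1, &matcher_attr);
	tag = mlx5dr_action_create_tag(ctx, MLX5DR_ACTION_FLAG_HWS_RX);

	/* The rule handle is caller allocated and needs no initialization. */
	rule = malloc(mlx5dr_rule_get_handle_size());
	rule_actions[0].action = tag;
	rule_actions[0].tag.value = 0x1234;
	if (mlx5dr_rule_create(matcher, 0, items, rule_actions, 1,
			       &rule_attr, rule))
		return -1;

	/* Rule insertion is asynchronous: poll the queue until the
	 * completion for the enqueued operation shows up. */
	do {
		comp = mlx5dr_send_queue_poll(ctx, 0, res, 4);
	} while (comp == 0);

	return (comp > 0 && res[0].status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
}

Since mlx5dr_rule_create() and mlx5dr_rule_destroy() only enqueue work, nothing is guaranteed to reach the HW until the completion is polled; MLX5DR_SEND_QUEUE_ACTION_DRAIN exists precisely to force all pending queued rules to be written to the HW.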
* [v1 09/19] net/mlx5/hws: Add HWS command layer 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-09-22 19:03 ` [v1 08/19] net/mlx5: Remove stub HWS support Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (14 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 609 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 957 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 232 ++++++++ 3 files changed, 1787 insertions(+), 11 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 12eb7b3b7f..b5624e7cd1 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1295,9 +1305,11 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1316,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1372,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1504,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1680,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1690,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1875,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2054,8 +2105,48 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 log_conn_track_max_alloc[0x5]; u8 reserved_at_d8[0x3]; u8 log_max_conn_track_offload[0x5]; - u8 reserved_at_e0[0x20]; /* End of DW7. 
*/ - u8 reserved_at_100[0x700]; + u8 reserved_at_e0[0xc0]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; +}; + +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; }; struct mlx5_ifc_esw_cap_bits { @@ -2079,6 +2170,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2092,6 +2184,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2958,6 +3064,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2966,6 +3073,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2973,9 +3085,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3009,6 +3126,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum 
mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union 
mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3024,6 +3378,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4233,6 +4617,209 @@ struct 
mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destnation */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 
reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..cc9ad6863c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,957 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, 
in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = 
mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + 
MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + 
stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + break; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void 
*attr; + + if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "too much patterns (%d), more than %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* pattern_length is in ddwords */ + MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, 
def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[DEVX_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(create_sq_in, in, ctx); + void *wqc = DEVX_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[DEVX_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[DEVX_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct 
flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, + out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, + out, + capability.cmd_hca_cap.log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, + out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, + out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + // TODO Current FW don't set this bit (yet) + caps->nic_ft.reparse = 1; + caps->fdb_ft.reparse = 1; + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 
0000000000..deef6eb454 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,232 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +#define WIRE_PORT 0xFFFF + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t 
log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
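To illustrate how the command layer above is meant to be driven, the following is a minimal usage sketch, not part of the patch itself: it only uses the create/modify/destroy entry points and attribute fields declared in mlx5dr_cmd.h, while the function name, the FW table type and the TIR number are placeholder values chosen for the example (a real caller would derive them from the context and action logic of later patches).

	/* Hypothetical sketch: allocate a small range of STC entries and
	 * program entry 0 as a jump-to-TIR action.  fw_table_type, tirn and
	 * the action_offset value are placeholders supplied by the caller.
	 */
	static struct mlx5dr_devx_obj *
	example_stc_jump_to_tir(struct ibv_context *ibv_ctx,
				uint8_t fw_table_type, uint32_t tirn)
	{
		struct mlx5dr_cmd_stc_create_attr create_attr = {
			.log_obj_range = 4,	/* range of 16 STC entries */
			.table_type = fw_table_type,
		};
		struct mlx5dr_cmd_stc_modify_attr modify_attr = {
			.stc_offset = 0,	/* first entry in the range */
			.action_offset = 0,	/* placeholder STE action offset */
			.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR,
			.dest_tir_num = tirn,
		};
		struct mlx5dr_devx_obj *stc;

		/* Creates the STC range object over DevX */
		stc = mlx5dr_cmd_stc_create(ibv_ctx, &create_attr);
		if (!stc)
			return NULL;

		/* Programs one entry inside the range (obj_offset = stc_offset) */
		if (mlx5dr_cmd_stc_modify(stc, &modify_attr)) {
			mlx5dr_cmd_destroy_obj(stc);
			return NULL;
		}

		return stc;	/* released later with mlx5dr_cmd_destroy_obj() */
	}

The same create/modify split is used by all the general objects in this patch: creation only reserves the object (or a log2 range of objects), and the per-entry content is written afterwards through a modify command addressed by object ID plus offset.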
* [v1 10/19] net/mlx5/hws: Add HWS pool and buddy 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-09-22 19:03 ` [v1 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker ` (13 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 18 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1043 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..9675f44e0d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,201 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "no mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + unsigned long n, unsigned long m) +{ + uint32_t pos = 0; /* compilation warn */ + uint64_t out_slab = 0; + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "got unexpected bit (%d < %ld) from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = simple_calloc(buddy->max_order + 1, sizeof(*buddy->num_free)); + if (!buddy->num_free) { + 
rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int) buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int) buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int) buddy->max_order; ++i) { + rte_free(buddy->bits[i]); + } + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int) buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..c456be90a1 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); +#endif diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..ab739ca843 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, + int resource_idx) +{ + mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "no shuch buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + static struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Fail to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have new memory pool, should be place for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk 
*chunk) +{ + int ret = 0; + + /* go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /*sharing the same resource, also means that all the elements are with size 1*/ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks in size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto 
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* one memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * allocate resource and give it. + * - When free that chunk: + * the resource is freed. 
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * aloocate the first and only slot of memory/resource + * when it ended return error. 
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..9e712744bf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only a one resource in that pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* No sharing resources between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects are in the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Manged by buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* uses for allocating chunk of big memory, each element has its own resource in the FW*/ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* one resource only, all the elements are with same one size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* many resources, the memory allocated with buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* db */ + struct mlx5dr_pool_db db; + /* functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + 
+struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
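For context on how the pool layer above is consumed, here is a minimal usage sketch, not part of the patch: callers describe the pool once through mlx5dr_pool_attr and then request chunks by order. The function name is hypothetical, the FDB table type is an arbitrary choice for the example, and the size is the default MLX5DR_POOL_STC_LOG_SZ from the header; only functions and fields declared in mlx5dr_pool.h are used.

	/* Hypothetical sketch: create an STC pool, take a single-entry chunk
	 * (order 0), map it back to its base DevX object and release it.
	 */
	static int example_stc_pool_usage(struct mlx5dr_context *ctx)
	{
		struct mlx5dr_pool_attr pool_attr = {
			.pool_type = MLX5DR_POOL_TYPE_STC,
			.table_type = MLX5DR_TABLE_TYPE_FDB,
			.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL,
			.alloc_log_sz = MLX5DR_POOL_STC_LOG_SZ,
		};
		struct mlx5dr_pool_chunk chunk = {.order = 0};
		struct mlx5dr_devx_obj *base_obj;
		struct mlx5dr_pool *pool;
		int ret;

		pool = mlx5dr_pool_create(ctx, &pool_attr);
		if (!pool)
			return rte_errno;

		ret = mlx5dr_pool_chunk_alloc(pool, &chunk);
		if (ret)
			goto destroy_pool;

		/* The chunk is addressed as (base DevX object, chunk.offset) */
		base_obj = mlx5dr_pool_chunk_get_base_devx_obj(pool, &chunk);
		(void)base_obj;

		mlx5dr_pool_chunk_free(pool, &chunk);
	destroy_pool:
		mlx5dr_pool_destroy(pool);
		return ret;
	}

With the FOR_STC_POOL flags this path exercises the one-size-resource database (single resource, bitmap of fixed-size slots); the FOR_MATCHER_STE_POOL and FOR_STE_ACTION_POOL flag combinations would instead select the per-chunk and buddy-managed databases shown earlier in the patch.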
* [v1 11/19] net/mlx5/hws: Add HWS send layer 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-09-22 19:03 ` [v1 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (12 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 849 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 273 ++++++++++ 2 files changed, 1122 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..63aba53792 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,849 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ 
+ struct mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue = queue; + ctrl.send_ring = &queue->send_ring[0]; // TODO: Change when send rings > 1 + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static +void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on a good flow as the enum + * is arranged in the order creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted, now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & (cq->ncqe_mask); + struct
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries ,sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + if (cq->ncqe < queue->num_entries) + DR_LOG(ERR, "%s - (ncqe: %u quque_num_entries: %u) Bug?!", + __func__, + cq->ncqe, + queue->num_entries); /* TODO - Debug test */ + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, 
&ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + +close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); /* TODO */ + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + uint32_t i; + int err = 0; + + /* TODO: For now there is a 1:1 queue:ring mapping + * add middle logic layer if it ever changes. 
+ */ + /* open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..1897a1df9e --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,273 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBs, this means a maximum of 16 + * such operations per rule + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; +
void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned head_dep_idx; + unsigned tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for both 32bit and + * 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. 
+ */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
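[Editorial illustration, not part of the patch] The completion path added above is easiest to see from the caller's side. Below is a minimal sketch of draining completions with mlx5dr_send_queue_poll(); it assumes the public prototype in mlx5dr.h mirrors the one declared in mlx5dr_send.h, that every rule operation was posted with a non-NULL user_data (otherwise mlx5dr_send_engine_update() reports no result for it), and the helper name, batch size and busy-wait policy are arbitrary choices for the example.

	/* Poll 'queue_id' until 'expected' operations complete.
	 * Returns how many of them completed with RTE_FLOW_OP_ERROR.
	 * A real caller would bound this loop or interleave other work.
	 */
	static uint32_t example_drain_completions(struct mlx5dr_context *ctx,
						  uint16_t queue_id,
						  uint32_t expected)
	{
		struct rte_flow_op_result res[32]; /* arbitrary batch size */
		uint32_t done = 0, failed = 0;
		int ret, i;

		while (done < expected) {
			/* Returns the number of results written into res[] */
			ret = mlx5dr_send_queue_poll(ctx, queue_id, res, RTE_DIM(res));
			for (i = 0; i < ret; i++) {
				if (res[i].status == RTE_FLOW_OP_ERROR)
					failed++;
				done++;
			}
		}
		return failed;
	}

The same pattern is what the PMD fast path relies on: results are only generated once the last WQE of a rule completes, so one poll result always corresponds to one rule-level operation.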
* [v1 12/19] net/mlx5/hws: Add HWS definer layer 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-09-22 19:03 ` [v1 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 13/19] net/mlx5/hws: Add HWS context object Alex Vesker ` (11 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_definer.c | 1866 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 582 ++++++++ 2 files changed, 2448 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..8507588c0d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1866 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely(bit_off < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = mask & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, mask >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, byte_off, bit_off, mask); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + do { \ + *((rte_be32_t *)((uint8_t *)(p) + byte_off)) = (v); \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + byte_off, v_ptr, 4); + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + do { \ + *((rte_be16_t *)((uint8_t *)(p) + byte_off)) = (v); \ + } while (0) + +/* Setter function based on byte offset to directly 
set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + byte_off, v_ptr, 2); + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, first_vlan_q, v->has_more_vlan ? STE_SVLAN : STE_CVLAN, rte_flow_item_vlan) \ + X(SET, eth_first_vlan_q, v->has_vlan ? 
STE_CVLAN : 0, rte_flow_item_eth) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) + +/* Item set function format */ +#define X(set_type, func_name, value, itme_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct itme_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + 
const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK): + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK): + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (v->hdr.icmp_cksum << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET_BE32(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (v->hdr.icmp_ident << __mlx5_dw_bit_off(header_icmp, ident)) | + (v->hdr.icmp_seq_nb << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET_BE32(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (v->checksum << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET_BE32(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + 
fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } + + if (m->next_proto_id) { + fc = 
&cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Port ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport. + */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc,
registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = 
__mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if(!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + if (m->sequence.sequence) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if(!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + 
fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on outer L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on outer L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code 
uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, + struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, 
ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, + ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + /* Find the definer for given header layout */ + ret = 
mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..09a3f40568 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,582 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 reserved_at_b80[0x40]; + // struct x udp_misc_outer; 0x20 + // struct x udp_misc_inner; 0x20 + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 reserved_at_ca0[0x2c0]; + // struct x mpls_outer; 0xa0 + // struct x mpls_inner; 0xa0 + // struct x config_headers_outer; 0x80 + // struct x config_headers_inner; 0x80 + // struct x random_number; 0x20 + // struct x ipsec; 0x60 + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 
reserved_at_fc0[0x80]; + // struct x utc_timestamp; 0x40 + // struct x free_running_timestamp; 0x40 + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + // struct x ib_l3_extended; + // struct x rwh; + // struct x dcceth; + // struct x dceth; + // /.autodirect/swgwork/maayang/repo_1/golan_fw/include/ + // tamar_g_cr_no_aligned_expose__descsteering_headers_layout_desc_adb.h +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 cksum[0x10]; + u8 code[0x8]; + u8 type[0x8]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 seq_nb[0x10]; + u8 ident[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
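[Editor's note] The definer layer above maps each rte_flow item onto the large header layout (hl) and then searches, via mlx5dr_definer_best_hl_fit_recu(), for a combination of DW and byte selectors that covers every byte the match template uses. The standalone sketch below illustrates that selector-fit idea in isolation. It is only an illustration under simplifying assumptions: it is greedy rather than recursive with backtracking, it ignores the limited-DW selectors and the 255-byte offset limit, and it works on host-order masks; the names example_fit_selectors, EX_DW_SELECTORS and EX_BYTE_SELECTORS are invented for the sketch and are not part of the patch.

/* Standalone illustration only -- not driver code. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define EX_DW_SELECTORS   6
#define EX_BYTE_SELECTORS 8

static bool example_fit_selectors(const uint32_t *hl, int num_dw,
				  uint8_t *dw_sel, uint8_t *byte_sel)
{
	int used_dw = 0, used_bytes = 0;
	int dw, b;

	for (dw = 0; dw < num_dw; dw++) {
		if (!hl[dw])
			continue; /* Nothing to match in this DW, skip it */

		if (used_dw < EX_DW_SELECTORS) {
			dw_sel[used_dw++] = dw; /* Cover the whole DW */
			continue;
		}

		/* Out of DW selectors: cover only the bytes that are set */
		for (b = 0; b < 4; b++) {
			if (!((hl[dw] >> (8 * (3 - b))) & 0xff))
				continue;
			if (used_bytes == EX_BYTE_SELECTORS)
				return false; /* No possible solution */
			byte_sel[used_bytes++] = dw * 4 + b;
		}
	}

	return true;
}

int main(void)
{
	uint32_t hl[10] = {0};
	uint8_t dw_sel[EX_DW_SELECTORS] = {0};
	uint8_t byte_sel[EX_BYTE_SELECTORS] = {0};

	hl[0] = 0xffffffff; /* e.g. a full DW such as outer DMAC high bits */
	hl[2] = 0x0000ffff; /* e.g. an ethertype in the lower half of a DW */
	hl[9] = 0xff000000; /* e.g. a single protocol byte */

	if (example_fit_selectors(hl, 10, dw_sel, byte_sel))
		printf("fit: dw_sel = {%d, %d, %d}\n",
		       dw_sel[0], dw_sel[1], dw_sel[2]);
	else
		printf("no fit with this selector budget\n");

	return 0;
}

The real driver first tries a pure match definer (DW_SELECTORS_MATCH full DW selectors) and only falls back to the jumbo layout when the header layout cannot be covered, which is why mlx5dr_definer_find_best_hl_fit() runs the recursion twice with different selector budgets.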
* [v1 13/19] net/mlx5/hws: Add HWS context object 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-09-22 19:03 ` [v1 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 14/19] net/mlx5/hws: Add HWS table object Alex Vesker ` (10 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 222 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 262 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..c0cc1bebc5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]" ,i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if 
(ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update){ + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* All rules are add by hash */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH)) { + DR_LOG(INFO, "Required HWS RTC index mode not supported"); + return; + } + + /* All rules are add by hash */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..98146aaa6d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, + MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
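[Editor's note] Taken together with the commit message, the flow is: the caller opens a context on an already-opened ibv device, the context queries capabilities, allocates (or reuses) a PD, creates the per-table-type STC pools and opens the send queues used later for rule insertion. A minimal usage sketch follows; it is hypothetical and not part of the patch. The attribute field names (pd, queues, queue_size) are taken from how mlx5dr_context_init_hws() and mlx5dr_context_init_pd() consume them above, and the include path assumes the public header drivers/net/mlx5/hws/mlx5dr.h from the diffstat; the chosen queue numbers are placeholder values.

/* Hypothetical usage sketch, not part of the patch. */
#include <stddef.h>
#include <infiniband/verbs.h>
#include "mlx5dr.h"

static struct mlx5dr_context *example_open_ctx(struct ibv_context *ibv_ctx)
{
	struct mlx5dr_context_attr attr = {0};

	attr.queues = 16;      /* Number of rule insertion queues (assumed value) */
	attr.queue_size = 256; /* Depth of each queue (assumed value) */
	attr.pd = NULL;        /* NULL lets mlx5dr allocate a private PD */

	/* Returns NULL and sets rte_errno on failure */
	return mlx5dr_context_open(ibv_ctx, &attr);
}

static void example_close_ctx(struct mlx5dr_context *ctx)
{
	/* Releases send queues, STC pools, the private PD and the caps */
	mlx5dr_context_close(ctx);
}

Note that on a device or FW without the required capabilities mlx5dr_context_open() still succeeds but leaves MLX5DR_CONTEXT_FLAG_HWS_SUPPORT clear, so creating a non-root table later fails with EOPNOTSUPP.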
* [v1 14/19] net/mlx5/hws: Add HWS table object 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-09-22 19:03 ` [v1 13/19] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (9 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..171c244491 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int 
+mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + 
pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..b0c39b0e69 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
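[Editor's note] A table is created under a context and, unless it is the root table (level 0), it is backed by a default flow table object in HW; FDB tables are additionally connected to a default miss table owned by the eswitch manager. A hypothetical usage sketch, not part of the patch, is shown below; the field names (type, level) follow the way mlx5dr_table_create() reads the attribute struct above, and the include path is assumed as in the previous sketch.

/* Hypothetical usage sketch, not part of the patch. */
#include "mlx5dr.h"

static struct mlx5dr_table *example_create_rx_table(struct mlx5dr_context *ctx)
{
	struct mlx5dr_table_attr attr = {0};
	struct mlx5dr_table *tbl;

	attr.type = MLX5DR_TABLE_TYPE_NIC_RX; /* RX steering domain */
	attr.level = 1; /* Non-root: level 0 is reserved for the root table */

	tbl = mlx5dr_table_create(ctx, &attr);
	if (!tbl)
		return NULL; /* rte_errno is set by the call */

	/* Matchers are later chained under the table by priority */
	return tbl;
}

static void example_destroy_rx_table(struct mlx5dr_table *tbl)
{
	mlx5dr_table_destroy(tbl);
}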
* [v1 15/19] net/mlx5/hws: Add HWS matcher object 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-09-22 19:03 ` [v1 14/19] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker ` (8 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 920 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 2 files changed, 996 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..f9c8248ef3 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,920 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Connect lists */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type 
= tbl->fw_ft_type; + + /* Connect to next */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + /* Connect previous end FT to next RTC if exists */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { /* last matcher is removed, point prev to the default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? 
"match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} + +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; 
+ struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combination is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additional STEs required for the matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + +free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void
mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i-1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + pool_attr.table_type = matcher->tbl->type; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return 
ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + 
mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + enum mlx5dv_flow_table_type ft_type; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; + break; + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.ft_type = ft_type; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..c5f38b9388 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match templates */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action templates */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table with 3% of the main + * table rows to the main table provides enough resources for a high + * insertion success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3 / 100) = x - 5.05 ~ x - 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Threshold to determine if the amount of rules requires a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
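
A minimal usage sketch of the matcher API added by this patch is shown below. It is not part of the patch: it assumes "tbl" was created through the HWS table patch ([v1 14/19]) and "at" through the action-template API added later in the series, whose creation call is not shown here; the example pattern, priority and size values are illustrative only.

/*
 * Illustrative sketch only (not part of the patch). "tbl" and "at" are
 * assumed to come from the table and action patches of this series.
 */
#include <rte_byteorder.h>
#include <rte_flow.h>
#include "mlx5dr.h"

static struct mlx5dr_matcher *
example_create_eth_matcher(struct mlx5dr_table *tbl,
			   struct mlx5dr_action_template *at)
{
	struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xffff) };
	struct rte_flow_item items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .mask = &eth_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct mlx5dr_matcher_attr attr = {0};
	struct mlx5dr_match_template *mt;
	struct mlx5dr_matcher *matcher;

	/* Duplicate the pattern into a match template */
	mt = mlx5dr_match_template_create(items,
					  MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
	if (!mt)
		return NULL;

	/* Size the matcher by the expected rule count (log2) and let the
	 * matcher derive the table depth on its own.
	 */
	attr.priority = 1;
	attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
	attr.rule.num_log = 12; /* room for up to 2^12 rules */

	matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &attr);
	if (!matcher) {
		mlx5dr_match_template_destroy(mt);
		return NULL;
	}

	return matcher;
}

With MLX5DR_MATCHER_RESOURCE_MODE_RULE the column depth is derived from rule.num_log by mlx5dr_matcher_rules_to_tbl_depth(), and for large rule counts an assured collision matcher is chained automatically, as implemented in the patch above.
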
* [v1 16/19] net/mlx5/hws: Add HWS rule object 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-09-22 19:03 ` [v1 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 17/19] net/mlx5/hws: Add HWS action object Alex Vesker ` (7 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..e393080c2b --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct rte_flow_item_ethdev *v; + const struct flow_hw_port_info *vport; + + /* flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err){ + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..88ecfb3e6c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* Action STE pool ID */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
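
As with the matcher patch, a short usage sketch of the asynchronous rule API may help; it is not part of the patch. It assumes "matcher" from the previous patch, a populated rule_actions[] array from the action patch, and a send queue set up through the earlier context/send patches; retrieving the completion for the rule goes through the send-layer polling API, which is not shown here.

/*
 * Illustrative sketch only (not part of the patch). The rule handle memory
 * is owned by the caller and sized via mlx5dr_rule_get_handle_size().
 * A completion must later be polled on "queue_id" through the send-layer
 * polling API (outside this patch); until then the rule is still creating.
 */
#include <stdlib.h>
#include <errno.h>
#include <rte_flow.h>
#include "mlx5dr.h"

static int
example_insert_rule_async(struct mlx5dr_matcher *matcher,
			  const struct rte_flow_item items[],
			  struct mlx5dr_rule_action rule_actions[],
			  uint16_t queue_id,
			  struct mlx5dr_rule **out_rule)
{
	struct mlx5dr_rule_attr attr = {0};
	struct mlx5dr_rule *rule;
	int ret;

	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		return -ENOMEM;

	attr.queue_id = queue_id; /* rules are queued per send queue */
	attr.user_data = rule;    /* mandatory, echoed back in the completion */
	attr.burst = 0;           /* flush dependent WQEs and ring the HW now */

	/* Use match template 0 and action template 0 of the matcher */
	ret = mlx5dr_rule_create(matcher, 0, items, 0, rule_actions,
				 &attr, rule);
	if (ret) {
		free(rule);
		return ret;
	}

	*out_rule = rule;
	return 0;
}

Deletion is symmetric: mlx5dr_rule_destroy() is queued with a rule attribute in the same way, and the caller can release the handle once the delete completion has been polled.
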
* [v1 17/19] net/mlx5/hws: Add HWS action object 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-09-22 19:03 ` [v1 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker ` (6 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2217 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 251 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 76 + 4 files changed, 3055 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..2977bbaf6f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2217 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + 
BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "no such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if 
(ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = MLX5DV_FLOW_ACTION_DROP; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case 
MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + uint32_t table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /*The FW doesn't allow to go back to wire in the RX side, so change it to DROP*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* according to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
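+	/* A single modify action is folded straight into the STC: copy the
+	 * pattern below, derive the STC action type from it and, for SET/ADD,
+	 * clear the immediate data, since the per-rule value is written to DW7
+	 * at rule insertion time (see mlx5dr_action_setter_modify_header()).
+	 */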
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed quering port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret){ + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + /* 
Optimization: get shared stc in case 2 pops will be needed */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + uint32_t *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static void +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, uint32_t *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) { + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) { + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) { + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; + } +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + + /* Convert action to FT type and verbs reformat type */ + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = 
EINVAL; + return rte_errno; + } + args_log_size += bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* when INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* the action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
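+	 * For example, a 14B L2 header becomes 1 remove + 4 inline inserts of
+	 * 4B (16B written, 2B padding) + 1 remove of the padding, i.e. 6
+	 * actions (DECAP_L3_NUM_ACTIONS_W_NO_VLAN); the 18B VLAN case yields 7.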
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *) mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret = ENOTSUP; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + uint32_t action_type; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if 
(!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)\n", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit hws (flags: %x0x, log_bulk_size: %d)\n", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize single modiy action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header\n"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + 
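+		/* Pop VLAN frees its own STCs and drops the shared POP STC
+		 * reference taken at creation time.
+		 */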
mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + mlx5dr_action_free_single_stc(ctx, tbl_type, 
&default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
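+	 * e.g. in the no-VLAN case (6 actions) the loop copies 3 x 4B and the
+	 * tail copy adds the remaining 2B, covering the full 14B header.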
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch(rule_action->action->type){ + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
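+	/* The counter offset is set on the control DW (DW0) and its STC on the
+	 * CTRL index of the same STE.
+	 */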
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if action were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reseve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation. 
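+	 * A jumbo match STE only carries the counter and hit STCs (see
+	 * MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE), so keeping setters[0] free
+	 * lets such rules jump straight to the first action STE.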
+ */ + last_setter = setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { /* we have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single ation counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST); + + at->num_actions = num_actions - 1; + at->action_type_arr = simple_calloc(num_actions, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..cdf281c17c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,251 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumvo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag { + ASF_SINGLE1 = 1 
<< 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + +static inline 
void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..40c0269ef3 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ + +#include "mlx5dr_internal.h" + + +/* it returns the roundup of log2(data_size) */ +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* cache and cache element handling */ +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if ((cur_num_of_actions != num_of_actions) || (cur_type != type)) + return false; + + /* all decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { /* compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct 
mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + 
+static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, (uint8_t *) wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); // TODO OPT: GTA ctrl might be ignored in case of arg + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, 
ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "arg size %d does not fit FW capability", + args_log_size + bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* when INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..dd6ffd1cd3 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. 
Affiliates + */ +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, /* keep updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + pthread_spinlock_t lock; /* protect pattern list */ + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
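For reference, a standalone sketch (not part of the patch) of how the helpers above size modify-header arguments: each modify action occupies 8 bytes (MLX5DR_MODIFY_ACTION_SIZE) and one argument chunk holds 64 bytes (MLX5DR_ARG_DATA_SIZE), so the pattern size is rounded up to the next power-of-two number of 64B chunks. The constants are hard-coded here only to keep the sketch self-contained; the function and variable names are illustrative.

#include <stdio.h>

/*
 * Mirrors mlx5dr_arg_get_arg_log_size(): round the modify-header pattern
 * (8 bytes per action) up to a power-of-two count of 64B argument chunks.
 * The driver additionally rejects patterns that would exceed
 * MLX5DR_ARG_CHUNK_SIZE_MAX.
 */
static unsigned int arg_log_size(unsigned int num_of_actions)
{
        unsigned int data_size = num_of_actions * 8;
        unsigned int log_size = 0;

        while ((64u << log_size) < data_size)
                log_size++;

        return log_size;
}

int main(void)
{
        unsigned int actions[] = {1, 6, 8, 9, 20};
        unsigned int i;

        for (i = 0; i < sizeof(actions) / sizeof(actions[0]); i++)
                printf("%2u modify actions -> %u x 64B argument chunk(s)\n",
                       actions[i], 1u << arg_log_size(actions[i]));

        return 0;
}

So one to eight actions fit a single 64B argument, nine actions spill into two, and twenty actions need four, which matches the thresholds in mlx5dr_arg_data_size_to_arg_log_size() above.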
* [v1 18/19] net/mlx5/hws: Add HWS debug layer 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-09-22 19:03 ` [v1 17/19] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-09-22 19:03 ` [v1 19/19] net/mlx5/hws: Enable HWS Alex Vesker ` (5 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 459 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 487 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..9e807e4de3 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,459 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..de8f199a1e --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
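For reference, a minimal caller-side sketch (not part of the patch) of how the dump entry point above can be used. The wrapper name hws_dump_to_file is hypothetical; only mlx5dr_debug_dump() and struct mlx5dr_context come from the patch, and an already-created context is assumed.

#include <stdio.h>
#include <errno.h>

#include "mlx5dr.h"   /* struct mlx5dr_context, mlx5dr_debug_dump() */

/*
 * Write the HWS object tree (context, send engines, tables, matchers and
 * templates) as CSV records, one line per object, each prefixed by its
 * MLX5DR_DEBUG_RES_TYPE_* id.
 */
static int hws_dump_to_file(struct mlx5dr_context *ctx, const char *path)
{
        FILE *f;
        int ret;

        f = fopen(path, "w");
        if (!f)
                return -errno;

        ret = mlx5dr_debug_dump(ctx, f);   /* 0 on success */

        fclose(f);
        return ret;
}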
* [v1 19/19] net/mlx5/hws: Enable HWS 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (17 preceding siblings ...) 2022-09-22 19:03 ` [v1 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-09-22 19:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (4 subsequent siblings) 23 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-09-22 19:03 UTC (permalink / raw) To: valex, viacheslavo, erezsh, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implementation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/hws/mlx5dr.h | 594 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 ++++ drivers/net/mlx5/meson.build | 1 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 2 + drivers/net/mlx5/mlx5_flow_hw.c | 4 +- 7 files changed, 711 insertions(+), 2 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build create mode 100644 drivers/net/mlx5/hws/mlx5dr.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..f94798dd2d --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h new file mode 100644 index 0000000000..d63b50eb0f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -0,0 +1,594 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation &
Affiliates + */ + +#ifndef MLX5DR_H_ +#define MLX5DR_H_ + +#include <rte_flow.h> + +struct mlx5dr_context; +struct mlx5dr_table; +struct mlx5dr_matcher; +struct mlx5dr_rule; + +enum mlx5dr_table_type { + MLX5DR_TABLE_TYPE_NIC_RX, + MLX5DR_TABLE_TYPE_NIC_TX, + MLX5DR_TABLE_TYPE_FDB, + MLX5DR_TABLE_TYPE_MAX, +}; + +enum mlx5dr_matcher_resource_mode { + /* Allocate resources based on number of rules with minimal failure probability */ + MLX5DR_MATCHER_RESOURCE_MODE_RULE, + /* Allocate fixed size hash table based on given column and rows */ + MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, +}; + +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + +enum mlx5dr_action_flags { + MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, + MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, + MLX5DR_ACTION_FLAG_ROOT_FDB = 1 << 2, + MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, + MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, + MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. + */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, +}; + +enum mlx5dr_action_reformat_type { + MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2, + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2, + MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2, + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, +}; + +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + +enum mlx5dr_match_template_flags { + /* Allow relaxed matching by skipping derived dependent match fields. 
*/ + MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, +}; + +enum mlx5dr_send_queue_actions { + /* Start executing all pending queued rules and write to HW */ + MLX5DR_SEND_QUEUE_ACTION_DRAIN = 1 << 0, +}; + +struct mlx5dr_context_attr { + uint16_t queues; + uint16_t queue_size; + size_t initial_log_ste_memory; /* Currently not in use */ + /* Optional PD used for allocating res ources */ + struct ibv_pd *pd; +}; + +struct mlx5dr_table_attr { + enum mlx5dr_table_type type; + uint32_t level; +}; + +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + +struct mlx5dr_matcher_attr { + /* Processing priority inside table */ + uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ + enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; + union { + struct { + uint8_t sz_row_log; + uint8_t sz_col_log; + } table; + + struct { + uint8_t num_log; + } rule; + }; +}; + +struct mlx5dr_rule_attr { + uint16_t queue_id; + void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; + uint32_t burst:1; +}; + +struct mlx5dr_devx_obj { + struct mlx5dv_devx_obj *obj; + uint32_t id; +}; + +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. + */ +struct mlx5dr_rule_action { + struct mlx5dr_action *action; + union { + struct { + uint32_t value; + } tag; + + struct { + uint32_t offset; + } counter; + + struct { + uint32_t offset; + uint8_t *data; + } modify_header; + + struct { + uint32_t offset; + uint8_t *data; + } reformat; + + struct { + __be32 vlan_hdr; + } push_vlan; + + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; + + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; + }; +}; + +/* Open a context used for direct rule insertion using hardware steering. + * Each context can contain multiple tables of different types. + * + * @param[in] ibv_ctx + * The ibv context to used for HWS. + * @param[in] attr + * Attributes used for context open. + * @return pointer to mlx5dr_context on success NULL otherwise. + */ +struct mlx5dr_context * +mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr); + +/* Close a context used for direct hardware steering. + * + * @param[in] ctx + * mlx5dr context to close. + * @return zero on success non zero otherwise. + */ +int mlx5dr_context_close(struct mlx5dr_context *ctx); + +/* Create a new direct rule table. Each table can contain multiple matchers. + * + * @param[in] ctx + * The context in which the new table will be opened. + * @param[in] attr + * Attributes used for table creation. + * @return pointer to mlx5dr_table on success NULL otherwise. + */ +struct mlx5dr_table * +mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr); + +/* Destroy direct rule table. + * + * @param[in] tbl + * mlx5dr table to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_table_destroy(struct mlx5dr_table *tbl); + +/* Create new match template based on items mask, the match template + * will be used for matcher creation. 
+ * + * @param[in] items + * Describe the mask for template creation + * @param[in] flags + * Template creation flags + * @return pointer to mlx5dr_match_template on success NULL otherwise + */ +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags); + +/* Destroy match template. + * + * @param[in] mt + * Match template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); + +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + +/* Create a new direct rule matcher. Each matcher can contain multiple rules. + * Matchers on the table will be processed by priority. Matching fields and + * mask are described by the match template. In some cases multiple match + * templates can be used on the same matcher. + * + * @param[in] table + * The table in which the new matcher will be opened. + * @param[in] mt + * Array of match templates to be used on matcher. + * @param[in] num_of_mt + * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. + * @param[in] attr + * Attributes used for matcher creation. + * @return pointer to mlx5dr_matcher on success NULL otherwise. + */ +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *table, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr); + +/* Destroy direct rule matcher. + * + * @param[in] matcher + * Matcher to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher); + +/* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. + * + * @return size in bytes of rule handle struct. + */ +size_t mlx5dr_rule_get_handle_size(void); + +/* Enqueue create rule operation. + * + * @param[in] matcher + * The matcher in which the new rule will be created. + * @param[in] mt_idx + * Match template index to create the match with. + * @param[in] items + * The items used for the value matching. + * @param[in] rule_actions + * Rule action to be executed on match. + * @param[in] at_idx + * Action template index to apply the actions with. + * @param[in] num_of_actions + * Number of rule actions. + * @param[in] attr + * Rule creation attributes. + * @param[in, out] rule_handle + * A valid rule handle. The handle doesn't require any initialization. + * @return zero on successful enqueue non zero otherwise. 
+ */ +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle); + +/* Enqueue destroy rule operation. + * + * @param[in] rule + * The rule destruction to enqueue. + * @param[in] attr + * Rule destruction attributes. + * @return zero on successful enqueue non zero otherwise. + */ +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr); + +/* Create direct rule drop action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule default miss action. + * Defaults are RX: Drop TX: Wire. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule goto table action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] tbl + * Destination table. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags); + +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + +/* Create direct rule goto TIR action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] obj + * Direct rule TIR devx object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags); + +/* Create direct rule TAG action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule counter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] obj + * Direct rule counter devx object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags); + +/* Create direct rule reformat action. 
+ * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] reformat_type + * Type of reformat. + * @param[in] data_sz + * Size in bytes of data. + * @param[in] inline_data + * Header data array in case of inline action. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags); + +/* Create direct rule modify header action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] pattern_sz + * Byte size of the pattern array. + * @param[in] pattern + * PRM format modify pattern action array. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags); + +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Destroy direct rule action. + * + * @param[in] action + * The action to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_destroy(struct mlx5dr_action *action); + +/* Poll queue for rule creation and deletions completions. 
+ * + * @param[in] ctx + * The context to which the queue belong to. + * @param[in] queue_id + * The id of the queue to poll. + * @param[in, out] res + * Completion array. + * @param[in] res_nb + * Maximum number of results to return. + * @return negative number on failure, the number of completions otherwise. + */ +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb); + +/* Perform an action on the queue + * + * @param[in] ctx + * The context to which the queue belong to. + * @param[in] queue_id + * The id of the queue to perform the action on. + * @param[in] actions + * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) + * @return zero on success non zero otherwise. + */ +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions); + +/* Dump HWS info + * + * @param[in] ctx + * The context which to dump the info from. + * @param[in] f + * The file to write the dump to. + * @return zero on success non zero otherwise. + */ +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); + +#endif diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..c0cd581eac --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) Copyright (c) 2022 NVIDIA Corporation 2021 NVIDIA CORPORATION. All rights reserved. Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. */ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) (_value & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) 
\ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 1 : 1ULL << log2above(n); +} + +#endif diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index c7ddd4b65c..f9b266c900 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -71,3 +71,4 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) +subdir('hws') diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 05a1bad0e6..48ae2244da 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,6 +34,7 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index cae1a64def..1ad75fc8c6 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,8 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" +#include "hws/mlx5dr_rule.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 78c741bb91..7343d59f1f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1107,7 +1107,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, + action_template_index, rule_acts, &rule_attr, &flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; @@ -1498,7 +1498,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
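For reference, a condensed usage sketch of the mlx5dr API declared above, not part of the patch: open a context, create a table, match and action templates, a matcher and a drop action, enqueue one rule on a queue and poll for its completion, then tear everything down. The helper names drain_one and hws_drop_ipv4_dst, the attribute values (queue count and size, table level, matcher log size) and the IPv4 match are illustrative assumptions; only the mlx5dr_* and rte_flow symbols come from the patch and DPDK, and all error handling is trimmed so the call order stays visible.

#include <stdlib.h>
#include <string.h>

#include <infiniband/verbs.h>
#include <rte_byteorder.h>
#include <rte_flow.h>
#include <rte_ip.h>

#include "mlx5dr.h"

/* Flush queue 0 and busy-wait until one enqueued operation completes. */
static int drain_one(struct mlx5dr_context *ctx)
{
        struct rte_flow_op_result res[1];
        int ret;

        mlx5dr_send_queue_action(ctx, 0, MLX5DR_SEND_QUEUE_ACTION_DRAIN);
        do {
                ret = mlx5dr_send_queue_poll(ctx, 0, res, 1);
        } while (ret == 0);

        return (ret == 1 && res[0].status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
}

/*
 * Insert, then remove, a single "drop IPv4 dst 192.168.0.1" rule.
 * ibv_ctx is assumed to be an already opened device context.
 */
static int hws_drop_ipv4_dst(struct ibv_context *ibv_ctx)
{
        enum mlx5dr_action_type at_types[] = {
                MLX5DR_ACTION_TYP_DROP, MLX5DR_ACTION_TYP_LAST,
        };
        struct rte_flow_item_ipv4 ipv4_mask = {
                .hdr.dst_addr = RTE_BE32(0xffffffff),
        };
        struct rte_flow_item_ipv4 ipv4_spec = {
                .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 0, 1)),
        };
        struct rte_flow_item items[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH },
                { .type = RTE_FLOW_ITEM_TYPE_IPV4,
                  .spec = &ipv4_spec, .mask = &ipv4_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
        struct mlx5dr_table_attr tbl_attr = {
                .type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1,
        };
        struct mlx5dr_matcher_attr m_attr = {
                .mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE,
                .rule.num_log = 10, /* room for 2^10 rules */
        };
        struct mlx5dr_rule_attr rule_attr = { .queue_id = 0 };
        struct mlx5dr_rule_action rule_actions[1];
        struct mlx5dr_action_template *at;
        struct mlx5dr_match_template *mt;
        struct mlx5dr_matcher *matcher;
        struct mlx5dr_context *ctx;
        struct mlx5dr_action *drop;
        struct mlx5dr_table *tbl;
        struct mlx5dr_rule *rule;
        int ret;

        ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
        tbl = mlx5dr_table_create(ctx, &tbl_attr);
        mt = mlx5dr_match_template_create(items, 0);
        at = mlx5dr_action_template_create(at_types);
        matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &m_attr);
        drop = mlx5dr_action_create_dest_drop(ctx, MLX5DR_ACTION_FLAG_HWS_RX);

        memset(rule_actions, 0, sizeof(rule_actions));
        rule_actions[0].action = drop;

        /* The rule handle memory is owned by the caller. */
        rule = calloc(1, mlx5dr_rule_get_handle_size());

        /* Enqueue on queue 0 with match template 0 and action template 0. */
        mlx5dr_rule_create(matcher, 0, items, 0, rule_actions, &rule_attr, rule);
        ret = drain_one(ctx);

        /* Teardown in reverse order. */
        mlx5dr_rule_destroy(rule, &rule_attr);
        drain_one(ctx);
        free(rule);
        mlx5dr_action_destroy(drop);
        mlx5dr_matcher_destroy(matcher);
        mlx5dr_action_template_destroy(at);
        mlx5dr_match_template_destroy(mt);
        mlx5dr_table_destroy(tbl);
        mlx5dr_context_close(ctx);

        return ret;
}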
* [v2 00/19] net/mlx5: Add HW steering low level support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (18 preceding siblings ...) 2022-09-22 19:03 ` [v1 19/19] net/mlx5/hws: Enable HWS Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 01/19] net/mlx5: split flow item translation Alex Vesker ` (18 more replies) 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (3 subsequent siblings) 23 siblings, 19 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm; +Cc: dev, orika Mellanox ConnetX devices supports packet matching, packet modification and redirection. These functionalities are also referred to as flow-steering. To configure a steering rule, the rule is written to the device owned memory, this memory is accessed and cached by the device when processing a packet. The highlight of this patchset is supporting HW Steering (HWS) which is the new technology supported in new ConnectX devices, HWS allows configuring steering rules directly to the HW using special HW queues with minimal CPU effort. This patchset is the internal low layer implementation for HWS used by the mlx5 PMD. The mlx5dr (direct rule) is layer that bridges between the PMD and the HW by configuring the HW offloads based on the PMD logic v2: Fix check patch and cosmetic changes Alex Vesker (13): net/mlx5: Add additional glue functions for HWS net/mlx5: Remove stub HWS support net/mlx5/hws: Add HWS command layer net/mlx5/hws: Add HWS pool and buddy net/mlx5/hws: Add HWS send layer net/mlx5/hws: Add HWS definer layer net/mlx5/hws: Add HWS context object net/mlx5/hws: Add HWS table object net/mlx5/hws: Add HWS matcher object net/mlx5/hws: Add HWS rule object net/mlx5/hws: Add HWS action object net/mlx5/hws: Add HWS debug layer net/mlx5/hws: Enable HWS Bing Zhao (2): common/mlx5: query set capability of registers net/mlx5: provide the available tag registers Dariusz Sosnowski (1): net/mlx5: add port to metadata conversion Suanming Mou (3): net/mlx5: split flow item translation net/mlx5: split flow item matcher and value translation net/mlx5: add hardware steering item translation function drivers/common/mlx5/linux/mlx5_glue.c | 121 +- drivers/common/mlx5/linux/mlx5_glue.h | 17 + drivers/common/mlx5/mlx5_devx_cmds.c | 30 + drivers/common/mlx5/mlx5_devx_cmds.h | 2 + drivers/common/mlx5/mlx5_prm.h | 653 ++++- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 210 +- drivers/net/mlx5/hws/mlx5dr_action.c | 2221 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 ++ drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 ++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 949 +++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ drivers/net/mlx5/hws/mlx5dr_context.c | 222 ++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 + drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1970 +++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 577 ++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 +++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 +++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 + drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ 
drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 + drivers/net/mlx5/linux/mlx5_os.c | 7 +- drivers/net/mlx5/meson.build | 2 +- drivers/net/mlx5/mlx5.c | 3 + drivers/net/mlx5/mlx5.h | 3 +- drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_dr.c | 383 --- drivers/net/mlx5/mlx5_flow.c | 17 + drivers/net/mlx5/mlx5_flow.h | 128 + drivers/net/mlx5/mlx5_flow_dv.c | 2599 +++++++++--------- drivers/net/mlx5/mlx5_flow_hw.c | 109 +- 42 files changed, 14297 insertions(+), 1680 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
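The "special HW queues" mentioned in the cover letter are what keeps the per-rule CPU cost low: instead of issuing a blocking command per rule, the PMD writes a descriptor into a queue consumed by the device and later polls for a completion. The sketch below is only a conceptual model of that producer/consumer split, written against hypothetical types and functions (hw_queue, rule_desc, queue_post, queue_poll); it is not the mlx5dr API, and the real descriptor layout, doorbells and completion format are nothing this simple.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 256 /* power of two so indices can wrap with a mask */

/* Hypothetical descriptor: which match and which action set to install. */
struct rule_desc {
	uint32_t match_id;
	uint32_t action_id;
	uint64_t user_data; /* echoed back on completion for correlation */
};

/* Hypothetical software view of one device-consumed ring. */
struct hw_queue {
	struct rule_desc ring[QUEUE_DEPTH];
	uint32_t pi; /* producer index, advanced by the CPU */
	uint32_t ci; /* consumer index, advanced by the "device" */
};

/* Post one rule: a few stores, no blocking call. */
static bool queue_post(struct hw_queue *q, struct rule_desc d)
{
	if (q->pi - q->ci == QUEUE_DEPTH)
		return false; /* ring full, caller retries later */
	q->ring[q->pi & (QUEUE_DEPTH - 1)] = d;
	q->pi++; /* real hardware would also get a doorbell write here */
	return true;
}

/* Poll completions; the "device" is simulated as instantly done. */
static int queue_poll(struct hw_queue *q, uint64_t *user_data, int max)
{
	int n = 0;

	while (n < max && q->ci != q->pi) {
		*user_data++ = q->ring[q->ci & (QUEUE_DEPTH - 1)].user_data;
		q->ci++;
		n++;
	}
	return n;
}

int main(void)
{
	struct hw_queue q = {0};
	uint64_t done[4];
	int i, n;

	for (i = 0; i < 4; i++)
		queue_post(&q, (struct rule_desc){ .match_id = i,
						   .action_id = 100 + i,
						   .user_data = i });
	n = queue_poll(&q, done, 4);
	for (i = 0; i < n; i++)
		printf("rule %llu installed\n", (unsigned long long)done[i]);
	return 0;
}

The same enqueue-then-poll idea appears to be why the HWS flow-creation path in mlx5_flow_hw.c goes through flow_hw_async_flow_create(): rule installation is asynchronous from the caller's point of view.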
* [v2 01/19] net/mlx5: split flow item translation 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker ` (17 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> In order to share the item translation code with hardware steering mode, this commit splits the flow item translation code into a dedicated function. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++--- 1 file changed, 979 insertions(+), 936 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 91f287af5c..70a3279e2f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13029,8 +13029,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Fill the flow with DV spec, lock free - * (mutex should be acquired by caller). + * Translate the flow item to matcher. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13040,8 +13039,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] actions - * Pointer to the list of actions. + * @param[in] matcher + * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. * * @return * 0 on success, a negative errno value otherwise and rte_errno is set.
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. - */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. 
*/ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14153,7 +14197,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
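The restructuring in the diff above ends with a two-phase translate: the action loop only accumulates action_flags and reserves a placeholder slot (modify_action_position) for the merged modify-header action, which is registered once RTE_FLOW_ACTION_TYPE_END is reached, and item translation is then delegated to flow_dv_translate_items(). Below is a minimal, self-contained sketch of that control flow; every sketch_* name and type is a hypothetical stand-in used for illustration only and does not exist in the mlx5 driver.

/*
 * Self-contained sketch of the two-phase translate pattern.
 * Simplified stand-in types replace the real mlx5 structures
 * (struct mlx5_flow, struct rte_flow_action, mhdr_res, ...).
 */
#include <stdint.h>
#include <stdbool.h>

#define SKETCH_MAX_ACTIONS   32
#define SKETCH_ACTION_MODIFY (1u << 0)
#define SKETCH_ACTION_JUMP   (1u << 1)

enum sketch_action_type {
	SKETCH_ACT_SET_MAC,	/* header rewrite, merged into one modify-header action */
	SKETCH_ACT_JUMP,	/* translated to a HW action immediately */
	SKETCH_ACT_END,
};

struct sketch_action {
	enum sketch_action_type type;
	const void *conf;
};

struct sketch_dev_flow {
	void *actions[SKETCH_MAX_ACTIONS];	/* ordered HW action list */
	uint32_t actions_n;
	uint64_t act_flags;
};

/* Placeholders for the real resource-registration helpers. */
static void *sketch_register_modify_hdr(void) { static int mh; return &mh; }
static void *sketch_create_jump(void) { static int jump; return &jump; }
static int sketch_translate_items(struct sketch_dev_flow *df) { (void)df; return 0; }

static int
sketch_translate(struct sketch_dev_flow *df, const struct sketch_action *actions)
{
	uint32_t modify_pos = UINT32_MAX;	/* slot not reserved yet */
	uint64_t flags = 0;
	bool end = false;

	/* Phase 1: walk the action list and build the HW action array. */
	for (; !end; actions++) {
		switch (actions->type) {
		case SKETCH_ACT_SET_MAC:
			/* All header rewrites share one modify-header resource. */
			flags |= SKETCH_ACTION_MODIFY;
			break;
		case SKETCH_ACT_JUMP:
			df->actions[df->actions_n++] = sketch_create_jump();
			flags |= SKETCH_ACTION_JUMP;
			break;
		case SKETCH_ACT_END:
			/* Register the merged modify header only at the end. */
			if (flags & SKETCH_ACTION_MODIFY)
				df->actions[modify_pos] =
					sketch_register_modify_hdr();
			end = true;
			break;
		}
		/* Reserve one slot for the merged modify-header action. */
		if ((flags & SKETCH_ACTION_MODIFY) && modify_pos == UINT32_MAX)
			modify_pos = df->actions_n++;
	}
	df->act_flags = flags;
	/* Phase 2: item/matcher translation now lives in its own helper. */
	return sketch_translate_items(df);
}

Deferring the modify-header slot this way keeps the relative ordering of HW actions stable while still letting several header-rewrite actions collapse into a single registered resource, and keeping item translation in a separate helper is what allows the later patches to reuse it for hardware steering.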
* [v2 02/19] net/mlx5: split flow item matcher and value translation 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-06 15:03 ` [v2 01/19] net/mlx5: split flow item translation Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 03/19] net/mlx5: add hardware steering item translation function Alex Vesker ` (16 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2317 +++++++++++++++---------------- 2 files changed, 1188 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 0fa1735b1a..2ebb8496f2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1264,6 +1264,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. */ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 70a3279e2f..a2704f0b98 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? 
(item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8323,70 +8342,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. */ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8400,145 +8410,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. 
+ */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. 
*/ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8547,14 +8561,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8564,68 +8579,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to 
the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8634,14 +8622,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8655,287 +8644,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8944,55 +8863,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9010,8 +8920,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9019,20 +8932,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9043,17 +8954,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9062,24 +8973,28 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { - const struct rte_flow_item_gre_opt *option_m = item->mask; - const struct rte_flow_item_gre_opt *option_v = item->spec; + const struct rte_flow_item_gre_opt *option_m; + const struct rte_flow_item_gre_opt *option_v; const struct rte_flow_item_gre *gre_m = gre_item->mask; const struct rte_flow_item_gre *gre_v = gre_item->spec; static const struct rte_flow_item_gre empty_gre = {0}; + struct rte_flow_item_gre_opt option_dm; struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; void *misc5_v; + memset(&option_dm, 0, sizeof(option_dm)); + MLX5_ITEM_UPDATE(item, key_type, option_v, option_m, &option_dm); /* * If only match key field, keep using misc for matching. * If need to match checksum or sequence, using misc5 and do @@ -9087,11 +9002,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9126,57 +9040,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9195,158 +9101,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9355,9 +9248,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9366,15 +9262,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9387,50 +9286,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9439,17 +9338,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9460,8 +9358,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9471,10 +9371,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9551,38 +9449,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
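For reference, the VXLAN translator further above folds the masked 24-bit VNI into misc5 tunnel_header_1 with byte 0 in the lowest bits and the reserved byte in bits 24-31, and the VXLAN-GPE/GENEVE paths apply the same per-byte value-and-mask before writing their vni fields. A minimal standalone illustration of that packing; pack_tunnel_header() is our name, not driver code:

#include <stdint.h>
#include <stdio.h>

/* Pack a masked 24-bit VNI plus the reserved byte the same way the patch
 * writes misc5 tunnel_header_1: vni[0] lowest, rsvd1 in bits 24-31. */
static uint32_t
pack_tunnel_header(const uint8_t vni_v[3], const uint8_t vni_m[3],
		   uint8_t rsvd_v, uint8_t rsvd_m)
{
	uint32_t hdr = (uint32_t)(vni_v[0] & vni_m[0]) |
		       (uint32_t)(vni_v[1] & vni_m[1]) << 8 |
		       (uint32_t)(vni_v[2] & vni_m[2]) << 16;

	hdr |= (uint32_t)(rsvd_v & rsvd_m) << 24;
	return hdr;
}

int
main(void)
{
	const uint8_t vni[3] = { 0x12, 0x34, 0x56 };
	const uint8_t mask[3] = { 0xff, 0xff, 0xff };

	printf("tunnel_header_1 = 0x%08x\n",
	       pack_tunnel_header(vni, mask, 0, 0));
	return 0;
}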
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9596,17 +9494,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9616,9 +9518,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9627,10 +9526,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9639,93 +9536,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
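MLX5_ITEM_VALID() and MLX5_ITEM_UPDATE(), used by the MPLS translator above and by most of the reworked translators, are defined elsewhere in this series. Judging only from the call sites, they appear to (a) skip items that carry nothing for the requested key and (b) resolve the spec/mask pointers, falling back to the default mask and aliasing value and mask for the mask-building and HW-steering cases. The stand-ins below (struct item, item_skip(), item_update(), KEY_*) are illustrative only, not the real macros:

#include <stdio.h>
#include <stdbool.h>

struct item {
	const void *spec;
	const void *mask;
};

enum { KEY_SW_V, KEY_SW_M, KEY_HS_V, KEY_HS_M };

/* Mirrors the early "return" in the translators: nothing to translate. */
static bool
item_skip(const struct item *it, int key_type)
{
	if (key_type == KEY_HS_M)
		return it->mask == NULL;
	return it->spec == NULL;
}

/* Resolve which pointers act as value (v) and mask (m) for this key. */
static void
item_update(const struct item *it, int key_type, const void **v,
	    const void **m, const void *def_mask)
{
	if (key_type == KEY_SW_V) {
		*v = it->spec;
		*m = it->mask ? it->mask : def_mask;
	} else if (key_type == KEY_HS_V) {
		*v = it->spec;
		*m = *v;		/* value key: mask follows the value */
	} else {
		*v = it->mask ? it->mask : def_mask;
		*m = *v;		/* mask key: value follows the mask */
	}
}

int
main(void)
{
	int spec = 1, def = 3;
	struct item it = { &spec, NULL };
	const void *v, *m;

	if (!item_skip(&it, KEY_SW_V)) {
		item_update(&it, KEY_SW_V, &v, &m, &def);
		printf("value=%d mask=%d\n", *(const int *)v, *(const int *)m);
	}
	return 0;
}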
* @param[in] reg_type @@ -9736,12 +9618,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9749,11 +9628,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9762,40 +9639,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9804,34 +9672,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
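REG_C_0 is shared between vport metadata and META/MARK, so flow_dv_match_meta_reg() above merges new bits into whatever is already programmed in metadata_reg_c_0 instead of overwriting the whole field, and flow_dv_match_meta_reg_all() simply runs the same single-buffer writer once for the value buffer and once for the mask buffer. A standalone sketch of the merge; merge_reg_c0() is our name:

#include <stdint.h>
#include <stdio.h>

/* Merge "data" into an already-programmed register image, touching only
 * the bits covered by "mask", as done above for metadata_reg_c_0. */
static uint32_t
merge_reg_c0(uint32_t current, uint32_t data, uint32_t mask)
{
	data &= mask;			/* the translator masks the data first */
	if (mask)
		current &= ~mask;	/* clear only the claimed bits */
	return current | data;
}

int
main(void)
{
	uint32_t reg = 0x0000abcd;	/* e.g. vport metadata in the low half */

	/* Program a META/MARK value into the high half. */
	reg = merge_reg_c0(reg, 0x12340000, 0xffff0000);
	printf("metadata_reg_c_0 = 0x%08x\n", reg);	/* 0x1234abcd */
	return 0;
}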
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9847,7 +9752,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9856,65 +9761,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
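When MARK or META ends up in REG_C_0, only the bits in dv_regc0_mask belong to the application, so the translators above shift both value and mask up to the first usable bit, obtained with rte_bsf32() on the register mask. A simplified standalone arithmetic sketch, with __builtin_ctz() standing in for rte_bsf32() and without the driver's exact masking order:

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint32_t msk_c0 = 0xffff0000;	/* example dv_regc0_mask */
	uint32_t shl_c0 = (uint32_t)__builtin_ctz(msk_c0);
	uint32_t mark_id = 0x1234;
	uint32_t mark_mask = 0xffff;
	uint32_t reg_value = (mark_id << shl_c0) & msk_c0;
	uint32_t reg_mask = (mark_mask << shl_c0) & msk_c0;

	printf("shift=%u value=0x%08x mask=0x%08x\n",
	       shl_c0, reg_value, reg_mask);
	return 0;
}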
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9922,17 +9828,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9941,6 +9847,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9950,7 +9858,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9958,50 +9866,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10010,31 +9918,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10042,6 +9953,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10055,20 +9973,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10078,8 +9993,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10091,21 +10004,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
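The PORT_ID translator above either writes misc.source_port or programs the per-port metadata tag into REG_C_0, depending on the E-Switch metadata mode, and when building the mask key it substitutes the port's vport_meta_mask (or 0xffff) for the concrete tag/id. A reduced standalone sketch of that selection; struct port_info, translate_port() and the esw_metadata_mode flag are illustrative simplifications of the priv/config checks in the driver:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct port_info {
	uint16_t vport_id;
	uint32_t vport_meta_tag;
	uint32_t vport_meta_mask;
};

static void
translate_port(const struct port_info *p, bool esw_metadata_mode,
	       bool building_mask, uint32_t *reg_c0, uint16_t *source_port)
{
	if (esw_metadata_mode)
		/* Match on the REG_C_0 vport metadata tag. */
		*reg_c0 = building_mask ? p->vport_meta_mask
					: p->vport_meta_tag;
	else
		/* Match on misc.source_port directly. */
		*source_port = building_mask ? 0xffff : p->vport_id;
}

int
main(void)
{
	struct port_info p = { 7, 0x00030000, 0xffff0000 };
	uint32_t reg_c0 = 0;
	uint16_t sport = 0;

	translate_port(&p, true, false, &reg_c0, &sport);
	printf("REG_C_0 value = 0x%08x, source_port = %u\n", reg_c0, sport);
	return 0;
}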
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10113,6 +10030,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10125,119 +10050,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10246,64 +10185,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10311,21 +10237,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10335,52 +10259,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10389,29 +10301,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10419,21 +10329,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10445,16 +10356,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
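For the eCPRI matching above: the first flex-parser sample carries the whole 4-byte common header, and the message type recovered with rte_be_to_cpu_32() is simply the second byte of that header on the wire, which then decides whether a second sample is programmed for the message body. A standalone illustration; ecpri_msg_type() is our helper name:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

static uint8_t
ecpri_msg_type(uint32_t common_hdr_be)
{
	uint8_t bytes[4];

	memcpy(bytes, &common_hdr_be, sizeof(bytes));
	return bytes[1];	/* byte 0: rev/flags, byte 1: message type */
}

int
main(void)
{
	/* Wire bytes: rev 1, type 2 (real-time control), size 0x0040. */
	const uint8_t wire[4] = { 0x10, 0x02, 0x00, 0x40 };
	uint32_t be;

	memcpy(&be, wire, sizeof(be));
	printf("eCPRI message type: %u\n", ecpri_msg_type(be));
	return 0;
}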
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10463,21 +10369,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10542,7 +10446,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11328,42 +11232,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) - return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13029,7 +12939,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13039,7 +13240,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13048,287 +13249,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
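flow_dv_translate_items_sws() above moves what used to be per-loop locals (item_flags, last_item, next_protocol, priority, tunnel_item, ...) into an mlx5_dv_matcher_workspace and keeps two copies, wks_m and wks, presumably so the same per-item translator can be run for the mask key and the value key separately (the actual call sites fall outside this excerpt, but flow_dv_translate_item_port_id_all() above shows the same two-pass idea). A trimmed standalone sketch of the pattern; struct matcher_ws and translate_one_item() are reduced stand-ins, not the driver structures:

#include <stdint.h>
#include <stdio.h>

/* Trimmed stand-in for mlx5_dv_matcher_workspace. */
struct matcher_ws {
	uint64_t item_flags;
	uint64_t last_item;
	uint8_t next_protocol;
	uint16_t priority;
};

static void
translate_one_item(struct matcher_ws *ws, uint64_t this_item_flag,
		   uint8_t next_proto)
{
	/* Each item updates the running state instead of loop locals. */
	ws->item_flags |= this_item_flag;
	ws->last_item = this_item_flag;
	ws->next_protocol = next_proto;
}

int
main(void)
{
	struct matcher_ws wks = { .next_protocol = 0xff };
	struct matcher_ws wks_m = wks;	/* separate state for the mask pass */

	translate_one_item(&wks, 1ull << 3, 17);	/* e.g. outer L4 UDP */
	translate_one_item(&wks_m, 1ull << 3, 17);
	printf("value pass flags=0x%llx, mask pass flags=0x%llx\n",
	       (unsigned long long)wks.item_flags,
	       (unsigned long long)wks_m.item_flags);
	return 0;
}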
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13338,13 +13293,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13354,48 +13318,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14124,7 +14122,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16690,27 +16688,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16742,9 +16736,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16757,10 +16748,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16769,7 +16760,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17305,7 +17296,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17325,7 +17316,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17344,7 +17335,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17364,7 +17355,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18846,8 +18837,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
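The refactor above keeps a single per-item translator and simply runs it twice per rule: once against the mask buffer with MLX5_SET_MATCHER_SW_M and once against the value buffer with MLX5_SET_MATCHER_SW_V. A minimal self-contained sketch of that calling pattern follows; the types and names below are invented for illustration and are not the driver code.

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the driver's key-type selectors. */
enum key_type { MATCHER_SW_M, MATCHER_SW_V };

struct item { uint32_t spec; uint32_t mask; };

/* One translator writes either the mask or the masked value into the key. */
static void
translate_item(const struct item *it, uint32_t *key, enum key_type kt)
{
	*key = (kt == MATCHER_SW_M) ? it->mask : (it->spec & it->mask);
}

int
main(void)
{
	struct item eth = { .spec = 0x0800, .mask = 0xffff };
	uint32_t match_mask = 0, match_value = 0;

	/* Same routine, two passes: mask buffer first, then value buffer. */
	translate_item(&eth, &match_mask, MATCHER_SW_M);
	translate_item(&eth, &match_value, MATCHER_SW_V);
	printf("mask=0x%x value=0x%x\n", match_mask, match_value);
	return 0;
}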
* [v2 03/19] net/mlx5: add hardware steering item translation function 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-06 15:03 ` [v2 01/19] net/mlx5: split flow item translation Alex Vesker 2022-10-06 15:03 ` [v2 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 04/19] net/mlx5: add port to metadata conversion Alex Vesker ` (15 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> Since hardware steering root table flows still work under FW steering mode, this commit provides shared item translation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 17 ++++ drivers/net/mlx5/mlx5_flow_dv.c | 93 +++++++++++++++++++++++++++++++++ 2 files changed, 110 insertions(+) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2ebb8496f2..86a08074dc 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1006,6 +1006,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -2122,4 +2134,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index a2704f0b98..a4c59f3762 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13229,6 +13229,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? *item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
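The prototype of flow_dv_translate_items_hws() above is the only interface a root-table caller needs; the workspace, the deferred tunnel translation and the match criteria are handled inside it. A rough, hypothetical caller is sketched below: the wrapper name, key buffer handling and choice of table type are assumptions for illustration, while the prototype, struct mlx5_flow_attr and MLX5DR_TABLE_TYPE_NIC_RX come from the patch.

/* Hypothetical in-tree fragment; relies on the driver's internal headers. */
static int
example_build_root_key(const struct rte_flow_item items[], uint16_t port_id,
		       void *key_buf, uint32_t key_type,
		       struct rte_flow_error *error)
{
	struct mlx5_flow_attr attr = {
		.port_id = port_id,			/* probed port owning the device */
		.group = 0,				/* root table */
		.tbl_type = MLX5DR_TABLE_TYPE_NIC_RX,	/* ingress root in this sketch */
	};
	uint64_t item_flags = 0;
	uint8_t match_criteria = 0;
	int ret;

	ret = flow_dv_translate_items_hws(items, &attr, key_buf, key_type,
					  &item_flags, &match_criteria, error);
	if (ret)
		return ret;
	/* item_flags and match_criteria can now feed matcher creation. */
	return 0;
}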
* [v2 04/19] net/mlx5: add port to metadata conversion 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-10-06 15:03 ` [v2 03/19] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 05/19] common/mlx5: query set capability of registers Alex Vesker ` (14 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch adds the initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 ++ drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 50 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 88 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 60677eb8d7..1036b870de 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1541,6 +1541,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); if (priv->sh->config.dv_flow_en == 2) return eth_dev; /* Port representor shares the same max priority with pf port. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 752b60d769..ad561bd86d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); #endif + flow_hw_clear_port_info(dev); if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index e4744b0a67..acf1467bf6 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 86a08074dc..2eb2b46060 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1320,6 +1320,56 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? 
port_info : NULL; +} + +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 12498794a5..fe809a83b9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2208,6 +2208,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
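Note that flow_hw_conv_port_id() above returns NULL both for an out-of-range port and for a port whose vport metadata was never populated (regc_mask == 0), so callers only need a single check. A small hypothetical helper using it when matching on the E-Switch source port; the helper name and error convention are illustration only:

/* Hypothetical fragment: fetch the REG_C value/mask describing a port. */
static int
example_get_port_regc(uint16_t port_id, uint32_t *value, uint32_t *mask)
{
	const struct flow_hw_port_info *info = flow_hw_conv_port_id(port_id);

	if (info == NULL)
		return -EINVAL; /* unknown port or vport metadata disabled */
	*value = info->regc_value;
	*mask = info->regc_mask;
	return 0;
}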
* [v2 05/19] common/mlx5: query set capability of registers 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-10-06 15:03 ` [v2 04/19] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 06/19] net/mlx5: provide the available tag registers Alex Vesker ` (13 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, or copy to a REG_C_x. The set capability is queried and saved for future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 44 +++++++++++++++++++++++++--- 3 files changed, 72 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index fb33023138..ac6891145d 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1058,6 +1058,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1157,6 +1175,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index af6053a788..d69dad613e 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -260,6 +260,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 4346279c81..12eb7b3b7f 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1892,6 +1892,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1920,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1944,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2015,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2041,6 +2076,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
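After the query above, set_reg_c holds the intersection of the NIC RX, NIC TX and, for an E-Switch manager, FDB set-action capabilities, one bit per REG_C register. A hypothetical helper for reading it is sketched below; the helper name is illustration only, and the bit i to REG_C_i mapping is the one the tag-register patch later in the series relies on.

/* Hypothetical fragment: check whether REG_C_<idx> supports the set action. */
static bool
example_reg_c_is_settable(const struct mlx5_hca_attr *attr, unsigned int idx)
{
	/* set_reg_c is an 8-bit mask; bit i corresponds to metadata_reg_c_i. */
	return idx < 8 && (attr->set_reg_c & (1u << idx)) != 0;
}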
* [v2 06/19] net/mlx5: provide the available tag registers 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-10-06 15:03 ` [v2 05/19] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (12 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 5 ++- drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 123 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 1036b870de..1d77b49aac 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1543,8 +1543,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, rte_rwlock_init(&priv->ind_tbls_lock); if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index ad561bd86d..cb1a670954 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1946,6 +1946,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) flow_hw_resource_release(dev); #endif flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ rte_delay_us_sleep(1000); diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 95ecbea39e..ea63c29bf9 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. 
*/ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index acf1467bf6..45109001ca 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2eb2b46060..cae1a64def 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1328,6 +1328,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1367,9 +1371,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) return NULL; } +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe809a83b9..78c741bb91 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2237,6 +2237,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
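With the shared array initialized above, resolving an application TAG index to a physical REG_C register is a plain lookup through flow_hw_get_reg_id(). A hypothetical use follows, with the wrapper name and error convention invented for illustration:

/* Hypothetical fragment: resolve an application TAG index to a REG_C_x. */
static int
example_tag_to_reg(uint32_t tag_index)
{
	int reg;

	if (tag_index >= MLX5_FLOW_HW_TAGS_MAX)
		return -1;
	reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, tag_index);
	if (reg == REG_NON)
		return -1; /* index beyond the probed available set */
	return reg; /* one of REG_C_x, usable when building the match key */
}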
* [v2 07/19] net/mlx5: Add additional glue functions for HWS 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-10-06 15:03 ` [v2 06/19] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 08/19] net/mlx5: Remove stub HWS support Alex Vesker ` (11 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 450dd6a06a..943d4bf833 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + enum mlx5dv_flow_table_type 
ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1449,6 +1548,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1507,7 +1607,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1516,8 +1618,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1526,6 +1632,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index c4903a6dce..ef7341a76a 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
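The *_root glue entries added above expose the plain mlx5dv calls without going through a DR domain, which is what the mlx5dr layer uses when programming kernel root tables. A hypothetical wrapper showing the call shape; only the glue callback itself comes from the patch:

/* Hypothetical fragment: create a root-table matcher through the new glue hook. */
static void *
example_create_root_matcher(struct ibv_context *ctx,
			    struct mlx5dv_flow_matcher_attr *attr)
{
	/* Bypasses the mlx5dv_dr domain path and goes straight to mlx5dv. */
	return mlx5_glue->dv_create_flow_matcher_root(ctx, attr);
}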
* [v2 08/19] net/mlx5: Remove stub HWS support 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-10-06 15:03 ` [v2 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker ` (10 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika This change breaks compilation, which is bad, but it will be fixed for the final submission. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/meson.build | 1 - drivers/net/mlx5/mlx5.h | 1 - drivers/net/mlx5/mlx5_dr.c | 383 ----------------------------- drivers/net/mlx5/mlx5_dr.h | 456 ----------------------------------- 4 files changed, 841 deletions(-) delete mode 100644 drivers/net/mlx5/mlx5_dr.c delete mode 100644 drivers/net/mlx5/mlx5_dr.h diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..c7ddd4b65c 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,7 +14,6 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', 'mlx5_flow_hw.c', diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index ea63c29bf9..29657ab273 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,6 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. - * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. 
- */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/mlx5_dr.h deleted file mode 100644 index d0b2c15652..0000000000 --- a/drivers/net/mlx5/mlx5_dr.h +++ /dev/null @@ -1,456 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. 
- */ - -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ - -#include <rte_flow.h> - -struct mlx5dr_context; -struct mlx5dr_table; -struct mlx5dr_matcher; -struct mlx5dr_rule; - -enum mlx5dr_table_type { - MLX5DR_TABLE_TYPE_NIC_RX, - MLX5DR_TABLE_TYPE_NIC_TX, - MLX5DR_TABLE_TYPE_FDB, - MLX5DR_TABLE_TYPE_MAX, -}; - -enum mlx5dr_matcher_resource_mode { - /* Allocate resources based on number of rules with minimal failure probability */ - MLX5DR_MATCHER_RESOURCE_MODE_RULE, - /* Allocate fixed size hash table based on given column and rows */ - MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, -}; - -enum mlx5dr_action_flags { - MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, - MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, - MLX5DR_ACTION_FLAG_ROOT_FDB = 1 << 2, - MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, - MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, - MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, -}; - -enum mlx5dr_action_reformat_type { - MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2, - MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2, - MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2, - MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, -}; - -enum mlx5dr_match_template_flags { - /* Allow relaxed matching by skipping derived dependent match fields. */ - MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, -}; - -enum mlx5dr_send_queue_actions { - /* Start executing all pending queued rules and write to HW */ - MLX5DR_SEND_QUEUE_ACTION_DRAIN = 1 << 0, -}; - -struct mlx5dr_context_attr { - uint16_t queues; - uint16_t queue_size; - size_t initial_log_ste_memory; - /* Optional PD used for allocating res ources */ - struct ibv_pd *pd; -}; - -struct mlx5dr_table_attr { - enum mlx5dr_table_type type; - uint32_t level; -}; - -struct mlx5dr_matcher_attr { - uint32_t priority; - enum mlx5dr_matcher_resource_mode mode; - union { - struct { - uint8_t sz_row_log; - uint8_t sz_col_log; - } table; - - struct { - uint8_t num_log; - } rule; - }; -}; - -struct mlx5dr_rule_attr { - uint16_t queue_id; - void *user_data; - uint32_t burst:1; -}; - -struct mlx5dr_devx_obj { - struct mlx5dv_devx_obj *obj; - uint32_t id; -}; - -struct mlx5dr_rule_action { - struct mlx5dr_action *action; - union { - struct { - uint32_t value; - } tag; - - struct { - uint32_t offset; - } counter; - - struct { - uint32_t offset; - uint8_t *data; - } modify_header; - - struct { - uint32_t offset; - uint8_t *data; - } reformat; - - struct { - rte_be32_t vlan_hdr; - } push_vlan; - }; -}; - -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; - -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; - -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; - }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ -}; - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr); - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. 
- */ -int mlx5dr_context_close(struct mlx5dr_context *ctx); - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr); - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_table_destroy(struct mlx5dr_table *tbl); - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. - * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags); - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table, - struct mlx5dr_match_template *mt[], - uint8_t num_of_mt, - struct mlx5dr_matcher_attr *attr); - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher); - -/* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. - * - * @return size in bytes of rule handle struct. - */ -size_t mlx5dr_rule_get_handle_size(void); - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, - uint8_t mt_idx, - const struct rte_flow_item items[], - struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, - struct mlx5dr_rule_attr *attr, - struct mlx5dr_rule *rule_handle); - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. 
- */ -int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, - struct mlx5dr_rule_attr *attr); - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, - struct mlx5dr_table *tbl, - uint32_t flags); - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx, - uint32_t flags); - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, - enum mlx5dr_action_reformat_type reformat_type, - size_t data_sz, - void *inline_data, - uint32_t log_bulk_size, - uint32_t flags); - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] pattern_sz - * Byte size of the pattern array. 
- * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -int mlx5dr_action_destroy(struct mlx5dr_action *action); - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, - uint16_t queue_id, - struct rte_flow_op_result res[], - uint32_t res_nb); - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, - uint16_t queue_id, - uint32_t actions); - -/* Dump HWS info - * - * @param[in] ctx - * The context which to dump the info from. - * @param[in] f - * The file to write the dump to. - * @return zero on success non zero otherwise. - */ -int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); - -#endif -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
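For reference, the stubbed API removed above is the same mlx5dr (direct rule) interface that the hws/ implementation later in this series provides. The intended usage is an asynchronous, queue-based flow: create steering objects, enqueue rule operations on a send queue, then drain the queue and poll for completions. The sketch below is illustrative only and is not part of the patch; it assumes a valid ibv context, uses arbitrary sizes and queue numbers, and omits error handling and the destroy calls. It is based solely on the declarations in the removed mlx5_dr.h (which this series renames to hws/mlx5dr.h).

/* Illustrative sketch only, not part of the patch: sizes, queue ids and
 * the drop action are arbitrary; error checks and cleanup are omitted.
 */
#include <rte_flow.h>
#include "mlx5dr.h" /* mlx5_dr.h before this series, hws/mlx5dr.h after */

static int example_insert_drop_rule(void *ibv_ctx,
                                    const struct rte_flow_item mask[],
                                    const struct rte_flow_item value[])
{
        struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
        struct mlx5dr_table_attr tbl_attr = { .type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1 };
        struct mlx5dr_matcher_attr m_attr = { .mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE,
                                              .rule.num_log = 12 };
        struct mlx5dr_rule_attr rule_attr = { .queue_id = 0, .burst = 0 };
        struct rte_flow_op_result res[1];
        struct mlx5dr_rule_action ra[1];
        struct mlx5dr_match_template *mt;
        struct mlx5dr_matcher *matcher;
        struct mlx5dr_context *ctx;
        struct mlx5dr_action *drop;
        struct mlx5dr_table *tbl;
        struct mlx5dr_rule rule;

        /* Create the steering objects: context, table, template, matcher. */
        ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
        tbl = mlx5dr_table_create(ctx, &tbl_attr);
        mt = mlx5dr_match_template_create(mask, MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
        matcher = mlx5dr_matcher_create(tbl, &mt, 1, &m_attr);
        drop = mlx5dr_action_create_dest_drop(ctx, MLX5DR_ACTION_FLAG_HWS_RX);

        /* Enqueue a rule insertion on queue 0 using match template index 0. */
        ra[0].action = drop;
        mlx5dr_rule_create(matcher, 0, value, ra, 1, &rule_attr, &rule);

        /* Push the pending queued rules to HW and reap the completion. */
        mlx5dr_send_queue_action(ctx, 0, MLX5DR_SEND_QUEUE_ACTION_DRAIN);
        return mlx5dr_send_queue_poll(ctx, 0, res, 1);
}

Per the header comments, mlx5dr_rule_create() and mlx5dr_rule_destroy() only enqueue the operation; MLX5DR_SEND_QUEUE_ACTION_DRAIN starts executing the pending queued rules and writes them to HW, and mlx5dr_send_queue_poll() returns the number of completions reaped.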
* [v2 09/19] net/mlx5/hws: Add HWS command layer 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-10-06 15:03 ` [v2 08/19] net/mlx5: Remove stub HWS support Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (9 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 609 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 949 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++++++++ 3 files changed, 1777 insertions(+), 11 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 12eb7b3b7f..d854fa88e9 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1295,9 +1305,11 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1316,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1372,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1504,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1680,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1690,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1875,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2054,8 +2105,48 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 log_conn_track_max_alloc[0x5]; u8 reserved_at_d8[0x3]; u8 log_max_conn_track_offload[0x5]; - u8 reserved_at_e0[0x20]; /* End of DW7. 
*/ - u8 reserved_at_100[0x700]; + u8 reserved_at_e0[0xc0]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; +}; + +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; }; struct mlx5_ifc_esw_cap_bits { @@ -2079,6 +2170,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2092,6 +2184,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2958,6 +3064,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2966,6 +3073,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2973,9 +3085,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3009,6 +3126,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum 
mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union 
mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3024,6 +3378,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4233,6 +4617,209 @@ struct 
mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destnation */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 
reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..31138948c9 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,949 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, 
in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = 
mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + 
MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + 
stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "Not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void *attr; + + 
if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "Pattern length %d exceeds limit %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* Pattern_length is in ddwords */ + MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, 
def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[DEVX_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(create_sq_in, in, ctx); + void *wqc = DEVX_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[DEVX_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[DEVX_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct 
flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap. + log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 0000000000..2548b2b238 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,230 @@ +/* 
SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int 
mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif /* MLX5DR_CMD_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
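The header above defines the DevX command surface used by the rest of the series. As a rough orientation, the sketch below shows a minimal call order a caller might follow: query capabilities, check WQE-based flow table update support, then create and destroy a range of STE objects. The wrapper function, its parameters and the chosen range size are illustrative assumptions, not part of the patch; only the mlx5dr_cmd_* calls, the capability fields and the error convention (set rte_errno, return non-zero) come from the code above.

/* Illustrative sketch only -- not part of the patch. Assumes an already
 * opened ibv_context and that mlx5dr_internal.h (which pulls in the
 * command layer above) is available to the caller.
 */
#include "mlx5dr_internal.h"

static int example_create_ste_range(struct ibv_context *ibv_ctx, uint8_t fw_ft_type)
{
	struct mlx5dr_cmd_ste_create_attr ste_attr = {0};
	struct mlx5dr_cmd_query_caps caps = {0};
	struct mlx5dr_devx_obj *ste;
	int ret;

	/* Read HCA, flow table and WQE-based flow table capabilities */
	ret = mlx5dr_cmd_query_caps(ibv_ctx, &caps);
	if (ret)
		return ret;

	/* HW steering requires WQE-based flow table update support */
	if (!caps.wqe_based_update) {
		rte_errno = ENOTSUP;
		return rte_errno;
	}

	/* Allocate a 2^10 range of STEs, bounded by the reported maximum */
	ste_attr.log_obj_range = RTE_MIN(10, (int)caps.ste_alloc_log_max);
	ste_attr.table_type = fw_ft_type;

	ste = mlx5dr_cmd_ste_create(ibv_ctx, &ste_attr);
	if (!ste)
		return rte_errno;

	/* ste->id is the base index of the allocated STE range */

	mlx5dr_cmd_destroy_obj(ste);
	return 0;
}

In the full driver these commands are not issued directly by users; the pool, context and matcher layers of the series drive them. The sketch only shows the raw command ordering.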
* [v2 10/19] net/mlx5/hws: Add HWS pool and buddy 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-10-06 15:03 ` [v2 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker ` (8 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1047 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..9dba95f0b1 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,201 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + unsigned long n, unsigned long m) +{ + uint64_t out_slab = 0; + uint32_t pos = 0; /* Compilation warn */ + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = simple_calloc(buddy->max_order + 1, sizeof(*buddy->num_free)); + if 
(!buddy->num_free) { + rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int)buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int)buddy->max_order; ++i) { + rte_free(buddy->bits[i]); + } + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int)buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..b9ec446b99 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); + +#endif /* MLX5DR_BUDDY_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..2bfda5b4a5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, + int resource_idx) +{ + 
mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + static struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* Find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Fail to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have new memory pool, should be place for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk 
*chunk) +{ + int ret = 0; + + /* Go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /*sharing the same resource, also means that all the elements are with size 1*/ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks in size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto 
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* One memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as follows: + * - At start, no memory is allocated at all. + * - When a new chunk request arrives: + * allocate a resource and hand it out. + * - When that chunk is freed: + * the resource is freed.
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general element_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as follows: + * - At start, no memory is allocated at all. + * - When a new chunk request arrives: + * allocate from the first and only slot of memory/resource; + * once it is exhausted, return an error. 
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* Support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..cd12c3ab9a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* 
Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only one resource in this pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* Resources are not shared between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects have the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Managed by the buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* Used for allocating big chunks of memory; each element has its own resource in the FW */ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* One resource only; all elements have the same single size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Many resources; the memory is allocated with the buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* DB */ + struct mlx5dr_pool_db db; + /* Functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void 
mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif /* MLX5DR_POOL_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
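Because the pool database selection in mlx5dr_pool_create() is driven purely by the flag combination, a short usage sketch may help. The wrapper function below and the assumption that a valid mlx5dr_context already exists are illustrative; the constants and all mlx5dr_pool_* calls are taken from mlx5dr_pool.h above.

/* Illustrative sketch only -- not part of the patch. Assumes a valid
 * mlx5dr_context was created elsewhere.
 */
static int example_stc_pool_usage(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr pool_attr = {0};
	struct mlx5dr_pool_chunk chunk = {0};
	struct mlx5dr_devx_obj *base;
	struct mlx5dr_pool *pool;
	int ret;

	/* STC pool: one DevX resource, fixed-size objects */
	pool_attr.pool_type = MLX5DR_POOL_TYPE_STC;
	pool_attr.table_type = MLX5DR_TABLE_TYPE_FDB;
	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL;
	pool_attr.alloc_log_sz = MLX5DR_POOL_STC_LOG_SZ;

	pool = mlx5dr_pool_create(ctx, &pool_attr);
	if (!pool)
		return rte_errno;

	/* Take one STC slot (order 0 == a single object) */
	chunk.order = 0;
	ret = mlx5dr_pool_chunk_alloc(pool, &chunk);
	if (ret)
		goto destroy_pool;

	/* The backing DevX object plus chunk.offset identify the STC entry */
	base = mlx5dr_pool_chunk_get_base_devx_obj(pool, &chunk);
	(void)base;

	mlx5dr_pool_chunk_free(pool, &chunk);
destroy_pool:
	mlx5dr_pool_destroy(pool);
	return ret;
}

The same pattern with MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL or the buddy-managed flag set selects the other two databases handled in mlx5dr_pool_create().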
* [v2 11/19] net/mlx5/hws: Add HWS send layer 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-10-06 15:03 ` [v2 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (7 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 844 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++++++++++ 2 files changed, 1119 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..26904a9040 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,844 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned int idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue 
= queue; + /* Currently only one send ring is supported */ + ctrl.send_ring = &queue->send_ring[0]; + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned int idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = + rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on a good flow as the enum + * is arranged in the order creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & cq->ncqe_mask; + struct
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = + htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries, sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, &ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + 
+close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + int err = 0; + uint32_t i; + + /* Open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + 
mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = -EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..8d4769495d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBS. + * This means a maximum of 16 such operations per rule. + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; + void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned int last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned int head_dep_idx; + unsigned int tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct 
mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for + * both 32bit and 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. + */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif /* MLX5DR_SEND_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
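For orientation, here is a minimal caller-side sketch (not taken from the series) of the send engine API declared above: it posts a single STE write through mlx5dr_send_ste() and busy-polls the queue with mlx5dr_send_queue_poll() until the completion arrives. The helper name and its parameters are hypothetical; it assumes the queues were already created with mlx5dr_send_queues_open(), that the GTA control segment and match tag were prepared elsewhere, and that user_data is non-NULL so the completion is reported back to the caller.

/*
 * Illustrative sketch only -- not part of the patch. Shows one way a
 * caller (in the PMD this is the rule layer) could drive the API above.
 */
#include <errno.h>

#include "mlx5dr_internal.h"

static int example_post_one_ste(struct mlx5dr_context *ctx,
				uint16_t queue_id,
				uint32_t rtc_id,
				uint32_t *used_rtc_id,
				struct mlx5dr_wqe_gta_ctrl_seg *gta_ctrl,
				struct mlx5dr_rule_match_tag *tag,
				void *user_data)
{
	struct mlx5dr_send_engine *queue = &ctx->send_queue[queue_id];
	struct mlx5dr_send_ste_attr ste_attr = {0};
	struct rte_flow_op_result res[1];
	int ret;

	/* Back-pressure: the queue tracks outstanding entries internally */
	if (mlx5dr_send_engine_full(queue))
		return -EBUSY;

	/* Two WQEBBs: GTA control segment + data segment carrying the tag */
	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
	ste_attr.send_attr.len = MLX5_SEND_WQE_BB * 2 -
				 sizeof(struct mlx5dr_wqe_ctrl_seg);
	ste_attr.send_attr.notify_hw = 1;	/* Ring the doorbell right away */
	ste_attr.send_attr.user_data = user_data; /* Non-NULL so a result is returned */
	ste_attr.rtc_0 = rtc_id;		/* Single RTC, no mirror copy */
	ste_attr.used_id_rtc_0 = used_rtc_id;
	ste_attr.wqe_ctrl = gta_ctrl;
	/* wqe_data left NULL: the send layer builds the data segment from the tag */
	ste_attr.wqe_tag = tag;
	ste_attr.wqe_tag_is_jumbo = false;
	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;

	mlx5dr_send_engine_inc_rule(queue);
	mlx5dr_send_ste(queue, &ste_attr);

	/* Busy-poll the CQ until the completion for this WQE shows up */
	do {
		ret = mlx5dr_send_queue_poll(ctx, queue_id, res, 1);
	} while (ret == 0);

	return res[0].status == RTE_FLOW_OP_SUCCESS ? 0 : -EIO;
}

Batching is then just a matter of posting several STEs with notify_hw left clear and ringing the doorbell once at the end, either on the last post or via mlx5dr_send_engine_flush_queue(), which is roughly what the MLX5DR_SEND_QUEUE_ACTION_DRAIN path above relies on.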
* [v2 12/19] net/mlx5/hws: Add HWS definer layer 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-10-06 15:03 ` [v2 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 13/19] net/mlx5/hws: Add HWS context object Alex Vesker ` (6 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_definer.c | 1970 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 577 ++++++++ 2 files changed, 2547 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..2a43fa808e --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1970 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_NO_VLAN 0x0 +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely((bit_off) < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, byte_off, (bit_off), (mask)); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + (*((rte_be32_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 4) + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + (*((rte_be16_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, 
byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 2) + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, 
udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_pdu, v->pdu_type, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_qfi, v->qfi, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, source_qp, v->queue, mlx5_rte_flow_item_sq) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ + X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) + +/* Item set function format */ +#define X(set_type, func_name, value, item_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct item_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_eth_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_eth *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_vlan ? STE_CVLAN : STE_NO_VLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vlan *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_more_vlan ? 
STE_SVLAN : STE_CVLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (v->hdr.icmp_cksum << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET_BE32(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (v->hdr.icmp_ident << __mlx5_dw_bit_off(header_icmp, ident)) | + (v->hdr.icmp_seq_nb << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET_BE32(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (v->checksum << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET_BE32(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + 
fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* Mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } + + if (m->next_proto_id) { + fc = 
&cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + if (m->pdu_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + if (m->qfi) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Port ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport.
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc, registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if 
(!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct mlx5_rte_flow_item_sq *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!m) + return 0; + + if (m->queue) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_source_qp_set; + DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + 
if (m->sequence.sequence) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on outer L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, 
tcp_icmp, icmp_dw2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on outer L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meter_color *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + MLX5_ASSERT(reg > 0); + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_meter_color_set; + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + ret = mlx5dr_definer_conv_item_sq(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_SQ; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METER_COLOR; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to 
skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, 
+ struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + 
/* Find the definer for given header layout */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..6982b7a0ab --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,577 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 unsupported_udp_misc_outer[0x20]; + u8 unsupported_udp_misc_inner[0x20]; + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 unsupported_mpls_outer[0xa0]; + u8 unsupported_mpls_inner[0xa0]; + u8 unsupported_config_headers_outer[0x80]; + u8 unsupported_config_headers_inner[0x80]; + u8 unsupported_random_number[0x20]; + u8 unsupported_ipsec[0x60]; + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 unsupported_utc_timestamp[0x40]; + u8 
unsupported_free_running_timestamp[0x40]; + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + /* struct x ib_l3_extended; */ + /* struct x rwh */ + /* struct x dcceth */ + /* struct x dceth */ +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 cksum[0x10]; + u8 code[0x8]; + u8 type[0x8]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 seq_nb[0x10]; + u8 ident[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif /* MLX5DR_DEFINER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
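To make the selector mechanism of the definer patch above easier to follow: the definer reduces a match template to up to DW_SELECTORS full dwords plus BYTE_SELECTORS single bytes picked out of the header layout, and the rule's match tag is simply those selections packed together. The sketch below is illustrative only and is not part of the patch set; it rebuilds a tag from a header-layout buffer using the same packing order as mlx5dr_definer_find_byte_in_tag(), with DW_SIZE assumed to be 4 (the real constant lives in the driver's internal headers) and all demo_* names being hypothetical.

/*
 * Illustrative sketch only -- not part of the patch set. It mirrors the
 * tag packing implied by mlx5dr_definer_find_byte_in_tag(): the first
 * DW_SELECTORS * DW_SIZE bytes of the tag hold the selected full dwords
 * (filled from the high end down), followed by BYTE_SELECTORS single
 * bytes (also filled from the high end down). DW_SIZE == 4 is assumed.
 */
#include <stdint.h>
#include <string.h>

#define DEMO_DW_SIZE        4
#define DEMO_DW_SELECTORS   9
#define DEMO_BYTE_SELECTORS 8
#define DEMO_TAG_SZ (DEMO_DW_SIZE * DEMO_DW_SELECTORS + DEMO_BYTE_SELECTORS)

static void
demo_build_tag(const uint8_t *hl,            /* header layout buffer */
               const uint8_t *dw_selector,   /* dword indexes into hl */
               const uint8_t *byte_selector, /* byte offsets into hl */
               uint8_t *tag)                 /* out: DEMO_TAG_SZ bytes */
{
        int i;

        /* Copy each selected dword, highest tag dword is used first */
        for (i = 0; i < DEMO_DW_SELECTORS; i++)
                memcpy(&tag[DEMO_DW_SIZE * (DEMO_DW_SELECTORS - i - 1)],
                       &hl[dw_selector[i] * DEMO_DW_SIZE], DEMO_DW_SIZE);

        /* Copy each selected byte after the dword area, also high to low */
        for (i = 0; i < DEMO_BYTE_SELECTORS; i++)
                tag[DEMO_DW_SIZE * DEMO_DW_SELECTORS +
                    (DEMO_BYTE_SELECTORS - i - 1)] = hl[byte_selector[i]];
}

In the driver itself this remapping is done once per template: mlx5dr_definer_fc_bind() translates each field copy's header-layout byte_off into a tag offset, so the per-rule mlx5dr_definer_create_tag() can write the rte_flow spec values straight into the tag without searching the selectors again.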
* [v2 13/19] net/mlx5/hws: Add HWS context object 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-10-06 15:03 ` [v2 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 14/19] net/mlx5/hws: Add HWS table object Alex Vesker ` (5 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 222 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 262 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..0b0831043e --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]", i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return 
mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update) { + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* All rules are add by hash */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH)) { + DR_LOG(INFO, "Required HWS RTC index mode not supported"); + return; + } + + /* All rules are add by hash */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..b0c7802daf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, + MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + 
MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif /* MLX5DR_CONTEXT_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
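Because the context is the root object that every table, matcher, rule and action hangs off, a short usage sketch may help; it is not part of the patch. The attribute field names (pd, queues, queue_size) are taken from how mlx5dr_context_init_hws() and mlx5dr_send_queues_open() consume them in the code above, the authoritative struct mlx5dr_context_attr definition is in mlx5dr.h, and the demo_* wrappers and queue sizes are made up for illustration.

/*
 * Hedged usage sketch, not part of the patch set. Attribute fields follow
 * their use in mlx5dr_context_init_hws() above; see mlx5dr.h for the
 * authoritative struct mlx5dr_context_attr definition.
 */
#include <infiniband/verbs.h>
#include <rte_errno.h>
#include "mlx5dr.h"

static struct mlx5dr_context *
demo_context_open(struct ibv_context *ibv_ctx)
{
        struct mlx5dr_context_attr attr = {0};

        attr.pd = NULL;        /* NULL lets mlx5dr allocate a private PD */
        attr.queues = 16;      /* number of send queues for configuring offloads */
        attr.queue_size = 256; /* depth of each send queue */

        /* Returns NULL and sets rte_errno, e.g. when HWS caps are missing */
        return mlx5dr_context_open(ibv_ctx, &attr);
}

static void
demo_context_close(struct mlx5dr_context *ctx)
{
        /* Objects created on the context are expected to be gone by now */
        mlx5dr_context_close(ctx);
}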
* [v2 14/19] net/mlx5/hws: Add HWS table object 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-10-06 15:03 ` [v2 13/19] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (4 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..d3f77e4780 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* Call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int +mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + 
struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* Take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + 
pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..786dddfaa4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif /* MLX5DR_TABLE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
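Since every matcher in the next patch is created under a table, a minimal table round-trip sketch is added here for orientation; it is not part of the patch. The field names (type, level) mirror their use in mlx5dr_table_create() above, struct mlx5dr_table_attr itself is declared in mlx5dr.h, and demo_table_roundtrip() is a hypothetical helper.

/*
 * Hedged usage sketch, not part of the patch set. Field names follow
 * mlx5dr_table_create() above; struct mlx5dr_table_attr is declared in
 * mlx5dr.h.
 */
#include <rte_errno.h>
#include "mlx5dr.h"

static int
demo_table_roundtrip(struct mlx5dr_context *ctx)
{
        struct mlx5dr_table_attr attr = {0};
        struct mlx5dr_table *tbl;

        attr.type = MLX5DR_TABLE_TYPE_NIC_RX;
        attr.level = 1; /* above MLX5DR_ROOT_LEVEL, so a HWS flow table is used */

        tbl = mlx5dr_table_create(ctx, &attr);
        if (!tbl)
                return -rte_errno; /* e.g. EOPNOTSUPP when HWS is unsupported */

        /* matchers and rules would be created under tbl here */

        return mlx5dr_table_destroy(tbl);
}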
* [v2 15/19] net/mlx5/hws: Add HWS matcher object 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-10-06 15:03 ` [v2 14/19] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker ` (3 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 2 files changed, 998 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..835a3908eb --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,922 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Find location in matcher list */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + 
if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + if (next) { + /* Connect previous end FT to next RTC if exists */ + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { + /* Matcher is last, point prev end FT to default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + enum mlx5dr_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? 
"match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} + +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; 
+ struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combinabtion is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additioanl STEs required for matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + +free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void 
mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i - 1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.table_type = matcher->tbl->type; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return 
ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + 
mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + enum mlx5dv_flow_table_type ft_type; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; + break; + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.ft_type = ft_type; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..b7bf94762c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match template */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action template */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table to the main table with + * 3% of the main table rows will be enough resources for high insertion + * success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3/100) = x - 5.05 ~ 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Thrashold to determine if amount of rules require a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif /* MLX5DR_MATCHER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v2 16/19] net/mlx5/hws: Add HWS rule object 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-10-06 15:03 ` [v2 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 17/19] net/mlx5/hws: Add HWS action object Alex Vesker ` (2 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..b27318e6d4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct flow_hw_port_info *vport; + const struct rte_flow_item_ethdev *v; + + /* Flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err) { + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..96c85674f2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* 
Action STE pool ID */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif /* MLX5DR_RULE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v2 17/19] net/mlx5/hws: Add HWS action object 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-10-06 15:03 ` [v2 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-06 15:03 ` [v2 19/19] net/mlx5/hws: Enable HWS Alex Vesker 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2221 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + 4 files changed, 3068 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..d3eb091498 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2221 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define WIRE_PORT 0xFFFF + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + 
BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "No such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & 
MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = MLX5DV_FLOW_ACTION_DROP; + break; + case 
MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + enum mlx5dr_table_type table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = + mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /* The FW doesn't allow to go back to wire in RX, so change it to DROP */ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* According to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
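+ /* A single-action pattern is executed directly from the STC: only the action control is kept here, and the data dword is cleared for SET/ADD below because the per-rule data is written into the WQE when the rule is enqueued. Patterns with more than one action fall back to the pattern and argument objects. */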
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed querying port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + ret = 
mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + enum mlx5dr_action_type *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static void +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, enum mlx5dv_flow_table_type *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + + /* Convert action to FT type and verbs reformat type */ + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = EINVAL; + return rte_errno; + } + args_log_size += 
bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* The action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* Assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
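+ * For example, a 14B L2 header is re-inserted as four 4B inline writes (16B) and the spare 2B are then removed; an 18B header (with VLAN) takes five inserts.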
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "Data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + enum mlx5dr_action_type action_type; + struct mlx5dr_action *action; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + 
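+ /* Below, root flags take the ibv packet-reformat verb path, while HWS flags allocate an ARG object and per-table STCs (L2-to-TNL-L3 also takes the shared decap STC). Illustrative HWS call, assuming a caller-built encap_hdr buffer holding the tunnel header: mlx5dr_action_create_reformat(ctx, MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2, sizeof(encap_hdr), encap_hdr, 0, MLX5DR_ACTION_FLAG_HWS_FDB | MLX5DR_ACTION_FLAG_SHARED); */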
if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)\n", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit hws (flags: %x0x, log_bulk_size: %d)\n", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize single modiy action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header\n"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + 
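+ /* Pop VLAN releases both its own STCs and the shared remove-header STC taken at create time */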
mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + mlx5dr_action_free_single_stc(ctx, tbl_type, 
&default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* Move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* Move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* Actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* Copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch(rule_action->action->type) { + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
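+ /* The counter always lands in the control STC slot: DW0 carries the per-rule counter offset and the CTRL STC index points at the counter object */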
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if action were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reseve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation. 
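+ * For example, the template {CTR, MODIFY_HDR, FT, LAST} packs into a single action STE: the counter takes the control slot, the modify header the DOUBLE slot, and the FT becomes the hit action.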
+ */ + setter = start_setter; + last_setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { + /* We have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single ation counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST); + + at->num_actions = num_actions - 1; + at->action_type_arr = simple_calloc(num_actions, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..f14d91f994 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumvo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag { + ASF_SINGLE1 
= 1 << 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + +static 
inline void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif /* MLX5DR_ACTION_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..9b73707ee8 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + /* Return the roundup of log2(data_size) */ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* Cache and cache element handling */ +int 
mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if ((cur_num_of_actions != num_of_actions) || (cur_type != type)) + return false; + + /* All decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { + /* Compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to 
allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + +static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, 
(uint8_t *) wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); // TODO OPT: GTA ctrl might be ignored in case of arg + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* Poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* Alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "Arg 
size %d does not fit FW capability", + args_log_size + bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "Failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..8a4670427f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* Modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + /* Keep MIN updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + /* Protect pattern list */ + pthread_spinlock_t lock; + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void 
mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif /* MLX5DR_PAT_ARG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
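
A note on the sizing helpers in mlx5dr_pat_arg.c above: the argument object backing a modify-header action is sized to the next power of two of the action data, counted in 64B argument chunks. The sketch below is illustrative only and not part of the patch; the helper name is made up, and it assumes the MLX5DR_MODIFY_ACTION_SIZE (8B) and MLX5DR_ARG_DATA_SIZE (64B) constants from mlx5dr_pat_arg.h together with an assert-enabled build.

#include <assert.h>
#include "mlx5dr_internal.h"

/* Illustrative check of the expected rounding behaviour of the
 * mlx5dr_arg_* sizing helpers (each modify-header action is 8B,
 * each argument chunk holds 64B of action data).
 */
static void example_arg_sizing_check(void)
{
	/* 1..8 actions (up to 64B) fit a single 64B argument chunk */
	assert(mlx5dr_arg_get_arg_log_size(8) == MLX5DR_ARG_CHUNK_SIZE_1);
	assert(mlx5dr_arg_get_arg_size(8) == 1);
	/* 9..16 actions (up to 128B) round up to two chunks */
	assert(mlx5dr_arg_get_arg_log_size(16) == MLX5DR_ARG_CHUNK_SIZE_2);
	assert(mlx5dr_arg_get_arg_size(16) == 2);
	/* 64 actions (512B) is the largest size one argument object covers */
	assert(mlx5dr_arg_get_arg_log_size(64) == MLX5DR_ARG_CHUNK_SIZE_4);
	/* Anything above that is rejected by the modify-header action path */
	assert(mlx5dr_arg_get_arg_log_size(65) == MLX5DR_ARG_CHUNK_SIZE_MAX);
}

mlx5dr_arg_create_modify_header_arg() then adds the matcher bulk size on top of this per-rule log size before allocating the argument object with mlx5dr_cmd_arg_create().
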
* [v2 18/19] net/mlx5/hws: Add HWS debug layer 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-10-06 15:03 ` [v2 17/19] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 2022-10-06 15:03 ` [v2 19/19] net/mlx5/hws: Enable HWS Alex Vesker 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 490 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..890a761c48 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int +mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int +mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..d81585150a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif /* MLX5DR_DEBUG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
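
To show how the new entry point is meant to be consumed, here is a minimal sketch (illustrative only, not part of the patch; the helper name and file handling are assumptions) of producing the CSV from application code:

#include <errno.h>
#include <stdio.h>
#include "mlx5dr.h"

/* Hypothetical helper: dump the HWS state of a context to a CSV file.
 * mlx5dr_debug_dump() serializes the context, its send queues, tables,
 * matchers and templates as comma separated rows, each tagged with one
 * of the MLX5DR_DEBUG_RES_TYPE_* values from mlx5dr_debug.h.
 */
static int example_dump_hws_csv(struct mlx5dr_context *ctx, const char *path)
{
	FILE *f = fopen(path, "w");
	int ret;

	if (!f)
		return -errno;

	ret = mlx5dr_debug_dump(ctx, f); /* 0 on success, negative otherwise */
	fclose(f);
	return ret;
}

Since every row starts with a numeric resource-type tag, the resulting file can be filtered with a small script or loaded directly into a spreadsheet.
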
* [v2 19/19] net/mlx5/hws: Enable HWS 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (17 preceding siblings ...) 2022-10-06 15:03 ` [v2 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-06 15:03 ` Alex Vesker 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-06 15:03 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implenation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/hws/mlx5dr.h | 594 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 ++++ drivers/net/mlx5/meson.build | 1 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_flow.h | 2 + drivers/net/mlx5/mlx5_flow_hw.c | 4 +- 7 files changed, 711 insertions(+), 2 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build create mode 100644 drivers/net/mlx5/hws/mlx5dr.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..f94798dd2d --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h new file mode 100644 index 0000000000..980bda0d63 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -0,0 +1,594 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_H_ +#define MLX5DR_H_ + +#include <rte_flow.h> + +struct mlx5dr_context; +struct mlx5dr_table; +struct mlx5dr_matcher; +struct mlx5dr_rule; + +enum mlx5dr_table_type { + MLX5DR_TABLE_TYPE_NIC_RX, + MLX5DR_TABLE_TYPE_NIC_TX, + MLX5DR_TABLE_TYPE_FDB, + MLX5DR_TABLE_TYPE_MAX, +}; + +enum mlx5dr_matcher_resource_mode { + /* Allocate resources based on number of rules with minimal failure probability */ + MLX5DR_MATCHER_RESOURCE_MODE_RULE, + /* Allocate fixed size hash table based on given column and rows */ + MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, +}; + +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + +enum mlx5dr_action_flags { + MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, + MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, + MLX5DR_ACTION_FLAG_ROOT_FDB = 1 << 2, + MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, + MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, + MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. 
+ */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, +}; + +enum mlx5dr_action_reformat_type { + MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2, + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2, + MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2, + MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, +}; + +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + +enum mlx5dr_match_template_flags { + /* Allow relaxed matching by skipping derived dependent match fields. */ + MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, +}; + +enum mlx5dr_send_queue_actions { + /* Start executing all pending queued rules and write to HW */ + MLX5DR_SEND_QUEUE_ACTION_DRAIN = 1 << 0, +}; + +struct mlx5dr_context_attr { + uint16_t queues; + uint16_t queue_size; + size_t initial_log_ste_memory; /* Currently not in use */ + /* Optional PD used for allocating resources */ + struct ibv_pd *pd; +}; + +struct mlx5dr_table_attr { + enum mlx5dr_table_type type; + uint32_t level; +}; + +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + +struct mlx5dr_matcher_attr { + /* Processing priority inside table */ + uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ + enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; + union { + struct { + uint8_t sz_row_log; + uint8_t sz_col_log; + } table; + + struct { + uint8_t num_log; + } rule; + }; +}; + +struct mlx5dr_rule_attr { + uint16_t queue_id; + void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; + uint32_t burst:1; +}; + +struct mlx5dr_devx_obj { + struct mlx5dv_devx_obj *obj; + uint32_t id; +}; + +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. + */ +struct mlx5dr_rule_action { + struct mlx5dr_action *action; + union { + struct { + uint32_t value; + } tag; + + struct { + uint32_t offset; + } counter; + + struct { + uint32_t offset; + uint8_t *data; + } modify_header; + + struct { + uint32_t offset; + uint8_t *data; + } reformat; + + struct { + __be32 vlan_hdr; + } push_vlan; + + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; + + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; + }; +}; + +/* Open a context used for direct rule insertion using hardware steering. + * Each context can contain multiple tables of different types. + * + * @param[in] ibv_ctx + * The ibv context to be used for HWS. + * @param[in] attr + * Attributes used for context open. + * @return pointer to mlx5dr_context on success NULL otherwise. + */ +struct mlx5dr_context * +mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr); + +/* Close a context used for direct hardware steering. + * + * @param[in] ctx + * mlx5dr context to close. + * @return zero on success non zero otherwise.
+ */ +int mlx5dr_context_close(struct mlx5dr_context *ctx); + +/* Create a new direct rule table. Each table can contain multiple matchers. + * + * @param[in] ctx + * The context in which the new table will be opened. + * @param[in] attr + * Attributes used for table creation. + * @return pointer to mlx5dr_table on success NULL otherwise. + */ +struct mlx5dr_table * +mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr); + +/* Destroy direct rule table. + * + * @param[in] tbl + * mlx5dr table to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_table_destroy(struct mlx5dr_table *tbl); + +/* Create new match template based on items mask, the match template + * will be used for matcher creation. + * + * @param[in] items + * Describe the mask for template creation + * @param[in] flags + * Template creation flags + * @return pointer to mlx5dr_match_template on success NULL otherwise + */ +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags); + +/* Destroy match template. + * + * @param[in] mt + * Match template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); + +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + +/* Create a new direct rule matcher. Each matcher can contain multiple rules. + * Matchers on the table will be processed by priority. Matching fields and + * mask are described by the match template. In some cases multiple match + * templates can be used on the same matcher. + * + * @param[in] table + * The table in which the new matcher will be opened. + * @param[in] mt + * Array of match templates to be used on matcher. + * @param[in] num_of_mt + * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. + * @param[in] attr + * Attributes used for matcher creation. + * @return pointer to mlx5dr_matcher on success NULL otherwise. + */ +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *table, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr); + +/* Destroy direct rule matcher. + * + * @param[in] matcher + * Matcher to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher); + +/* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation. + * + * @return size in bytes of rule handle struct. + */ +size_t mlx5dr_rule_get_handle_size(void); + +/* Enqueue create rule operation. + * + * @param[in] matcher + * The matcher in which the new rule will be created. 
+ * @param[in] mt_idx + * Match template index to create the match with. + * @param[in] items + * The items used for the value matching. + * @param[in] rule_actions + * Rule action to be executed on match. + * @param[in] at_idx + * Action template index to apply the actions with. + * @param[in] num_of_actions + * Number of rule actions. + * @param[in] attr + * Rule creation attributes. + * @param[in, out] rule_handle + * A valid rule handle. The handle doesn't require any initialization. + * @return zero on successful enqueue non zero otherwise. + */ +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle); + +/* Enqueue destroy rule operation. + * + * @param[in] rule + * The rule destruction to enqueue. + * @param[in] attr + * Rule destruction attributes. + * @return zero on successful enqueue non zero otherwise. + */ +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr); + +/* Create direct rule drop action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule default miss action. + * Defaults are RX: Drop TX: Wire. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule goto table action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] tbl + * Destination table. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags); + +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + +/* Create direct rule goto TIR action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] obj + * Direct rule TIR devx object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags); + +/* Create direct rule TAG action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. 
+ */ +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags); + +/* Create direct rule counter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] obj + * Direct rule counter devx object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags); + +/* Create direct rule reformat action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] reformat_type + * Type of reformat. + * @param[in] data_sz + * Size in bytes of data. + * @param[in] inline_data + * Header data array in case of inline action. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags); + +/* Create direct rule modify header action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] pattern_sz + * Byte size of the pattern array. + * @param[in] pattern + * PRM format modify pattern action array. + * @param[in] log_bulk_size + * Number of unique values used with this pattern. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags); + +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. 
+ */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Destroy direct rule action. + * + * @param[in] action + * The action to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_destroy(struct mlx5dr_action *action); + +/* Poll queue for rule creation and deletions completions. + * + * @param[in] ctx + * The context to which the queue belong to. + * @param[in] queue_id + * The id of the queue to poll. + * @param[in, out] res + * Completion array. + * @param[in] res_nb + * Maximum number of results to return. + * @return negative number on failure, the number of completions otherwise. + */ +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb); + +/* Perform an action on the queue + * + * @param[in] ctx + * The context to which the queue belong to. + * @param[in] queue_id + * The id of the queue to perform the action on. + * @param[in] actions + * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) + * @return zero on success non zero otherwise. + */ +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions); + +/* Dump HWS info + * + * @param[in] ctx + * The context which to dump the info from. + * @param[in] f + * The file to write the dump to. + * @return zero on success non zero otherwise. + */ +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); + +#endif diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..dbd77b9c66 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. */ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) (_value & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) 
\ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 1 : 1ULL << log2above(n); +} + +#endif /* MLX5DR_INTERNAL_H_ */ diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index c7ddd4b65c..f9b266c900 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -71,3 +71,4 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) +subdir('hws') diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 29657ab273..77309e32a0 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,6 +34,7 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index cae1a64def..1ad75fc8c6 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,8 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" +#include "hws/mlx5dr_rule.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 78c741bb91..7343d59f1f 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1107,7 +1107,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, + action_template_index, rule_acts, &rule_attr, &flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; @@ -1498,7 +1498,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
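
For readers of the exported mlx5dr.h above, the following condensed sketch shows the intended call flow end to end. It is illustrative only and not part of the patch: error unwinding and object teardown are omitted, and the helper name, queue size, table level, matcher depth and the caller supplied item arrays are placeholder assumptions.

#include <stdlib.h>
#include "mlx5dr.h"

/* Sketch: open a HWS context, build one NIC RX matcher from a caller
 * supplied match template mask, enqueue a single drop rule on queue 0
 * and poll that queue until the insertion completes.
 */
static int example_hws_drop_rule(struct ibv_context *ibv_ctx,
				 const struct rte_flow_item mask[],
				 const struct rte_flow_item value[])
{
	enum mlx5dr_action_type types[] = {
		MLX5DR_ACTION_TYP_DROP,
		MLX5DR_ACTION_TYP_LAST,
	};
	struct mlx5dr_context_attr ctx_attr = {.queues = 1, .queue_size = 256};
	struct mlx5dr_table_attr tbl_attr = {.type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1};
	struct mlx5dr_matcher_attr m_attr = {
		.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE,
		.rule = {.num_log = 12}, /* room for up to 4K rules */
	};
	struct mlx5dr_rule_attr r_attr = {.queue_id = 0};
	struct mlx5dr_rule_action r_act[1];
	struct rte_flow_op_result res[1];
	struct mlx5dr_match_template *mt;
	struct mlx5dr_action_template *at;
	struct mlx5dr_matcher *matcher;
	struct mlx5dr_context *ctx;
	struct mlx5dr_table *tbl;
	struct mlx5dr_rule *rule;
	int ret;

	ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
	tbl = mlx5dr_table_create(ctx, &tbl_attr);
	mt = mlx5dr_match_template_create(mask, MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
	at = mlx5dr_action_template_create(types);
	matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &m_attr);
	r_act[0].action = mlx5dr_action_create_dest_drop(ctx, MLX5DR_ACTION_FLAG_HWS_RX);

	/* Rule handles are allocated by the caller, sized by the library */
	rule = malloc(mlx5dr_rule_get_handle_size());

	/* Enqueue the rule (match template 0, action template 0)... */
	ret = mlx5dr_rule_create(matcher, 0, value, 0, r_act, &r_attr, rule);
	if (ret)
		return ret;

	/* ...and busy-poll queue 0 until its completion arrives */
	do {
		ret = mlx5dr_send_queue_poll(ctx, 0, res, 1);
	} while (ret == 0);

	return (ret < 0 || res[0].status == RTE_FLOW_OP_ERROR) ? -1 : 0;
}

This mirrors what mlx5_flow_hw.c does through the rte_flow async API: templates, tables and matchers are set up once, while rule insertion and deletion are per-queue enqueue operations completed by polling.
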
* [v3 00/18] net/mlx5: Add HW steering low level support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (19 preceding siblings ...) 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 01/18] net/mlx5: split flow item translation Alex Vesker ` (17 more replies) 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (2 subsequent siblings) 23 siblings, 18 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm; +Cc: dev, orika Mellanox ConnetX devices supports packet matching, packet modification and redirection. These functionalities are also referred to as flow-steering. To configure a steering rule, the rule is written to the device owned memory, this memory is accessed and cached by the device when processing a packet. The highlight of this patchset is supporting HW Steering (HWS) which is the new technology supported in new ConnectX devices, HWS allows configuring steering rules directly to the HW using special HW queues with minimal CPU effort. This patchset is the internal low layer implementation for HWS used by the mlx5 PMD. The mlx5dr (direct rule) is layer that bridges between the PMD and the HW by configuring the HW offloads based on the PMD logic v2: Fix check patch and cosmetic changes v3: -Fix unsupported items -Fix compilation with mlx5dv dependency Alex Vesker (9): net/mlx5: Add additional glue functions for HWS net/mlx5/hws: Add HWS send layer net/mlx5/hws: Add HWS definer layer net/mlx5/hws: Add HWS context object net/mlx5/hws: Add HWS table object net/mlx5/hws: Add HWS matcher object net/mlx5/hws: Add HWS rule object net/mlx5/hws: Add HWS action object net/mlx5/hws: Enable HWS Bing Zhao (2): common/mlx5: query set capability of registers net/mlx5: provide the available tag registers Dariusz Sosnowski (1): net/mlx5: add port to metadata conversion Erez Shitrit (2): net/mlx5/hws: Add HWS command layer net/mlx5/hws: Add HWS pool and buddy Hamdan Igbaria (1): net/mlx5/hws: Add HWS debug layer Suanming Mou (3): net/mlx5: split flow item translation net/mlx5: split flow item matcher and value translation net/mlx5: add hardware steering item translation function drivers/common/mlx5/linux/meson.build | 2 + drivers/common/mlx5/linux/mlx5_glue.c | 121 +- drivers/common/mlx5/linux/mlx5_glue.h | 17 + drivers/common/mlx5/mlx5_devx_cmds.c | 30 + drivers/common/mlx5/mlx5_devx_cmds.h | 2 + drivers/common/mlx5/mlx5_prm.h | 652 ++++- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 210 +- drivers/net/mlx5/hws/mlx5dr_action.c | 2221 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 ++ drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 ++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 +++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 + drivers/net/mlx5/hws/mlx5dr_debug.c | 462 +++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 ++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + drivers/net/mlx5/hws/mlx5dr_pool.c 
| 672 +++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 + drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 + drivers/net/mlx5/linux/mlx5_os.c | 12 +- drivers/net/mlx5/meson.build | 5 +- drivers/net/mlx5/mlx5.c | 3 + drivers/net/mlx5/mlx5.h | 3 +- drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_dr.c | 383 --- drivers/net/mlx5/mlx5_flow.c | 27 +- drivers/net/mlx5/mlx5_flow.h | 174 +- drivers/net/mlx5/mlx5_flow_dv.c | 2631 +++++++++--------- drivers/net/mlx5/mlx5_flow_hw.c | 115 +- 43 files changed, 14365 insertions(+), 1721 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
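[Editor's note, not part of the original series] The sketch below illustrates the queue-based programming model the cover letter describes: a context is opened with a number of HW insertion queues, a matcher is built from a match template inside a table, a rule insertion is posted to a queue, and the completion is later reaped by polling. It is only a rough approximation of the mlx5dr.h interface added by this series; the attribute fields (queues, queue_size, level, priority, queue_id) and the signatures of mlx5dr_context_open(), mlx5dr_match_template_create(), mlx5dr_matcher_create(), mlx5dr_rule_create() and mlx5dr_send_queue_poll() are assumptions reconstructed from the patch titles and may not match the merged header in every detail.

/*
 * Editor's illustration only: queue-based HWS rule insertion flow.
 * Names, fields and signatures are approximations of the mlx5dr API
 * added by this series and are not guaranteed to be exact.
 */
#include <stdlib.h>
#include <infiniband/verbs.h>
#include <rte_flow.h>
#include "mlx5dr.h"

static int
hws_insert_rule_sketch(struct ibv_context *ibv_ctx,
		       const struct rte_flow_item items[],
		       struct mlx5dr_rule_action rule_actions[],
		       uint8_t num_actions)
{
	struct mlx5dr_context_attr ctx_attr = {
		.queues = 1,       /* number of HW rule queues (assumed field) */
		.queue_size = 256, /* outstanding ops per queue (assumed field) */
	};
	struct mlx5dr_table_attr tbl_attr = { .level = 1 };
	struct mlx5dr_matcher_attr matcher_attr = { .priority = 0 };
	struct mlx5dr_rule_attr rule_attr = { .queue_id = 0 };
	struct rte_flow_op_result comp[1];
	struct mlx5dr_match_template *mt;
	struct mlx5dr_matcher *matcher;
	struct mlx5dr_context *ctx;
	struct mlx5dr_table *tbl;
	struct mlx5dr_rule *rule;
	int ret = -1;

	ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
	if (!ctx)
		return -1;
	tbl = mlx5dr_table_create(ctx, &tbl_attr);
	mt = mlx5dr_match_template_create(items, 0);
	if (!tbl || !mt)
		goto out;
	matcher = mlx5dr_matcher_create(tbl, &mt, 1, &matcher_attr);
	if (!matcher)
		goto out;
	/* The rule handle memory is owned by the caller. */
	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		goto out;
	/* Post the insertion to a HW queue; the call itself does not block. */
	ret = mlx5dr_rule_create(matcher, 0, items, rule_actions,
				 num_actions, &rule_attr, rule);
	if (ret)
		goto out;
	/* Reap the completion; a real caller polls until the rule completes. */
	ret = mlx5dr_send_queue_poll(ctx, rule_attr.queue_id, comp, 1);
out:
	/* Teardown (rule destroy, matcher/template/table/context close) omitted. */
	return ret;
}

The point of the design, as the cover letter states, is that the CPU only builds and posts work descriptors; the device itself writes and caches the rule, so insertion cost per rule is small and many rules can be in flight per queue.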
* [v3 01/18] net/mlx5: split flow item translation 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker ` (16 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> In order to share the item translation code with hardware steering mode, this commit splits the flow item translation code into a dedicated function. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++--------------- 1 file changed, 979 insertions(+), 936 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 91f287af5c..70a3279e2f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13029,8 +13029,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Fill the flow with DV spec, lock free - * (mutex should be acquired by caller). + * Translate the flow item to matcher. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13040,8 +13039,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] actions - * Pointer to the list of actions. + * @param[in] matcher + * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. * @@ -13049,1041 +13048,1086 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. - */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. 
*/ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14153,7 +14197,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
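The hunk above ends with the per-action switch only accumulating flags and resources, item translation deferred to a single flow_dv_translate_items() call after the loop, and the modify-header resource registered once RTE_FLOW_ACTION_TYPE_END is reached. As a rough illustration of that control flow only, here is a self-contained sketch; every name below (the *_sketch helpers, the ACT_* constants) is a hypothetical stand-in mirroring the structure of the hunk, not the mlx5 PMD's actual API.

    /*
     * Sketch: walk actions first (flags only), register the modify
     * header at END, translate items once afterwards.
     */
    #include <inttypes.h>
    #include <stdio.h>

    #define ACT_FLAG_MODIFY  (1u << 0)
    #define ACT_FLAG_JUMP    (1u << 1)

    enum sketch_action_type { ACT_SET_MAC, ACT_JUMP, ACT_END };

    struct sketch_flow {
            uint64_t act_flags;
            int items_translated;
            int modify_hdr_registered;
    };

    /* Stand-in for a single-pass item translation helper. */
    static int translate_items_sketch(struct sketch_flow *flow)
    {
            flow->items_translated = 1;
            return 0;
    }

    static int translate_sketch(struct sketch_flow *flow,
                                const enum sketch_action_type *actions, int n)
    {
            for (int i = 0; i < n; i++) {
                    switch (actions[i]) {
                    case ACT_SET_MAC:
                            /* Modify-field style actions only mark a flag here. */
                            flow->act_flags |= ACT_FLAG_MODIFY;
                            break;
                    case ACT_JUMP:
                            flow->act_flags |= ACT_FLAG_JUMP;
                            break;
                    case ACT_END:
                            /* The modify-header resource is registered once,
                             * when the END action closes the list. */
                            if (flow->act_flags & ACT_FLAG_MODIFY)
                                    flow->modify_hdr_registered = 1;
                            break;
                    }
            }
            /* Pattern items are translated after the action loop, in one place. */
            return translate_items_sketch(flow);
    }

    int main(void)
    {
            const enum sketch_action_type acts[] = { ACT_SET_MAC, ACT_JUMP, ACT_END };
            struct sketch_flow flow = { 0 };

            if (translate_sketch(&flow, acts, 3) == 0)
                    printf("flags=0x%" PRIx64 " modify_hdr=%d items=%d\n",
                           flow.act_flags, flow.modify_hdr_registered,
                           flow.items_translated);
            return 0;
    }

Keeping item translation out of the per-action switch is what the following v3 02/18 patch builds on: the same item translators are then split so they can fill either the matcher mask or the matcher value in two separate stages.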
* [v3 02/18] net/mlx5: split flow item matcher and value translation 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-14 11:48 ` [v3 01/18] net/mlx5: split flow item translation Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 03/18] net/mlx5: add hardware steering item translation function Alex Vesker ` (15 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2314 +++++++++++++++---------------- 2 files changed, 1185 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 0fa1735b1a..2ebb8496f2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1264,6 +1264,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. */ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 70a3279e2f..0589cafc30 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? 
(item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8323,70 +8342,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. */ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8400,145 +8410,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. 
+ */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. 
*/ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8547,14 +8561,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8564,68 +8579,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to 
the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8634,14 +8622,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8655,287 +8644,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8944,55 +8863,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9010,8 +8920,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9019,20 +8932,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9043,17 +8954,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9062,13 +8973,16 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { + void *misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); const struct rte_flow_item_gre_opt *option_m = item->mask; const struct rte_flow_item_gre_opt *option_v = item->spec; const struct rte_flow_item_gre *gre_m = gre_item->mask; @@ -9077,8 +8991,6 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; - void *misc5_v; /* * If only match key field, keep using misc for matching. @@ -9087,11 +8999,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9126,57 +9037,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + option_v = option_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9195,158 +9098,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9355,9 +9245,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9366,15 +9259,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9387,50 +9283,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9439,17 +9335,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9460,8 +9355,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9471,10 +9368,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9551,38 +9446,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9596,17 +9491,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9616,9 +9515,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9627,10 +9523,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9639,93 +9533,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] reg_type @@ -9736,12 +9615,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9749,11 +9625,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9762,40 +9636,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9804,34 +9669,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
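 *
 * A sketch of the MLX5_SET_MATCHER_* flags assumed by this and the
 * following helpers; the real definitions are introduced elsewhere in
 * this series (presumably in mlx5_flow.h), this layout is only inferred
 * from the bit tests used below (key_type & MLX5_SET_MATCHER_SW, _M, _V):
 *
 * @code
 *	enum mlx5_set_matcher {
 *		MLX5_SET_MATCHER_SW_V = 1 << 0, /* SW steering, value buffer */
 *		MLX5_SET_MATCHER_SW_M = 1 << 1, /* SW steering, mask buffer */
 *		MLX5_SET_MATCHER_HS_V = 1 << 2, /* HW steering, value */
 *		MLX5_SET_MATCHER_HS_M = 1 << 3, /* HW steering, mask */
 *	};
 *	#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M)
 *	#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M)
 *	#define MLX5_SET_MATCHER_V  (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V)
 *	#define MLX5_SET_MATCHER_M  (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M)
 * @endcode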
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9847,7 +9749,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9856,65 +9758,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9922,17 +9825,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9941,6 +9844,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9950,7 +9855,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9958,50 +9863,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10010,31 +9915,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10042,6 +9950,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10055,20 +9970,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10078,8 +9990,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10091,21 +10001,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
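 *
 * Usage sketch, not taken verbatim from this patch: on the SW steering
 * path each item is expected to be translated twice, once per matcher
 * buffer, in the same way the *_port_id_all() wrapper below does it:
 *
 * @code
 *	flow_dv_translate_item_represented_port(dev, matcher->mask.buf,
 *						item, attr,
 *						MLX5_SET_MATCHER_SW_M);
 *	flow_dv_translate_item_represented_port(dev, dev_flow->dv.value.buf,
 *						item, attr,
 *						MLX5_SET_MATCHER_SW_V);
 * @endcode
 *
 * while the HW steering path fills a single buffer per call using the
 * HS_* key types.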
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10113,6 +10027,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10125,119 +10047,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
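 *
 * MLX5_ITEM_VALID()/MLX5_ITEM_UPDATE() used below are defined elsewhere
 * in this series; judging only from the call sites here, the update step
 * behaves roughly like the following (v, m and def_mask are placeholders
 * for the spec/mask pointers and the item's default mask):
 *
 * @code
 *	if (key_type == MLX5_SET_MATCHER_SW_V) {
 *		v = item->spec;
 *		m = item->mask ? item->mask : def_mask;
 *	} else if (key_type == MLX5_SET_MATCHER_HS_V) {
 *		v = item->spec;
 *		m = v;
 *	} else {
 *		/* SW_M / HS_M: build the mask-side key. */
 *		v = item->mask ? item->mask : def_mask;
 *		m = v;
 *	}
 * @endcode
 *
 * and MLX5_ITEM_VALID() is assumed to report whether the item carries
 * the data needed for the requested key_type, letting the helper return
 * early when it does not.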
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10246,64 +10182,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10311,21 +10234,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10335,52 +10256,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10389,29 +10298,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10419,21 +10326,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10445,16 +10353,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10463,21 +10366,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10542,7 +10443,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11328,42 +11229,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) - return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13029,7 +12936,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13039,7 +13237,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13048,287 +13246,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13338,13 +13290,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13354,48 +13315,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14124,7 +14119,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16690,27 +16685,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16742,9 +16733,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16757,10 +16745,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16769,7 +16757,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17305,7 +17293,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17325,7 +17313,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17344,7 +17332,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17364,7 +17352,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18846,8 +18834,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
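The refactoring in the patch above replaces the (match_mask, match_value) buffer pair with a single key buffer selected by a key_type flag, so one translation routine can serve the SW-steering mask pass, the SW-steering value pass and, later, the HWS callers. A minimal standalone sketch of that dispatch, using hypothetical names (this is not the PMD code):

        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical stand-ins for MLX5_SET_MATCHER_SW_M / _SW_V. */
        #define MATCHER_KEY_MASK  (1u << 0)
        #define MATCHER_KEY_VALUE (1u << 1)

        struct eth_item {
                uint8_t dst[6];
        };

        struct flow_item {
                const struct eth_item *spec;
                const struct eth_item *mask;
        };

        /* One routine fills either buffer, depending on key_type. */
        static void
        translate_item_eth(uint8_t key[6], const struct flow_item *item,
                           uint32_t key_type)
        {
                const struct eth_item *src =
                        (key_type == MATCHER_KEY_MASK) ? item->mask : item->spec;
                int i;

                for (i = 0; i < 6; i++)
                        key[i] = src->dst[i] & item->mask->dst[i];
        }

        int
        main(void)
        {
                struct eth_item spec = { .dst = { 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff } };
                struct eth_item mask = { .dst = { 0xff, 0xff, 0xff, 0xff, 0x00, 0x00 } };
                struct flow_item item = { .spec = &spec, .mask = &mask };
                uint8_t key_m[6], key_v[6];

                /* SW steering: two passes over the same item list. */
                translate_item_eth(key_m, &item, MATCHER_KEY_MASK);
                translate_item_eth(key_v, &item, MATCHER_KEY_VALUE);
                printf("mask[0]=%02x value[0]=%02x\n", key_m[0], key_v[0]);
                return 0;
        }

With this shape, SW steering keeps its two passes (mask buffer, then value buffer), while a single-pass caller only needs to pick the key and the key_type, which is why the item handlers above no longer take separate match_mask/match_value parameters.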
* [v3 03/18] net/mlx5: add hardware steering item translation function 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-14 11:48 ` [v3 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-14 11:48 ` [v3 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 04/18] net/mlx5: add port to metadata conversion Alex Vesker ` (14 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering root table flows still work under FW steering mode. This commit provides shared item tranlsation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.c | 10 +-- drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++- drivers/net/mlx5/mlx5_flow_dv.c | 134 ++++++++++++++++++++++++-------- 3 files changed, 155 insertions(+), 41 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index e4744b0a67..81bed6f6a3 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7108,7 +7108,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) struct rte_flow_item_port_id port_spec = { .id = MLX5_PORT_ESW_MGR, }; - struct mlx5_rte_flow_item_tx_queue txq_spec = { + struct mlx5_rte_flow_item_sq txq_spec = { .queue = txq, }; struct rte_flow_item pattern[] = { @@ -7118,7 +7118,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) }, { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &txq_spec, }, { @@ -7504,16 +7504,16 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, .egress = 1, .priority = 0, }; - struct mlx5_rte_flow_item_tx_queue queue_spec = { + struct mlx5_rte_flow_item_sq queue_spec = { .queue = queue, }; - struct mlx5_rte_flow_item_tx_queue queue_mask = { + struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; struct rte_flow_item items[] = { { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &queue_spec, .last = NULL, .mask = &queue_mask, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2ebb8496f2..288e09d5ba 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -28,7 +28,7 @@ enum mlx5_rte_flow_item_type { MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN, MLX5_RTE_FLOW_ITEM_TYPE_TAG, - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, MLX5_RTE_FLOW_ITEM_TYPE_VLAN, MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL, }; @@ -95,7 +95,7 @@ struct mlx5_flow_action_copy_mreg { }; /* Matches on source queue. */ -struct mlx5_rte_flow_item_tx_queue { +struct mlx5_rte_flow_item_sq { uint32_t queue; }; @@ -159,7 +159,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_LAYER_GENEVE (1u << 26) /* Queue items. */ -#define MLX5_FLOW_ITEM_TX_QUEUE (1u << 27) +#define MLX5_FLOW_ITEM_SQ (1u << 27) /* Pattern tunnel Layer bits (continued). */ #define MLX5_FLOW_LAYER_GTP (1u << 28) @@ -196,6 +196,9 @@ enum mlx5_feature_name { #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41) #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42) +/* Meter color item */ +#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44) + /* Outer Masks. 
*/ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) @@ -1006,6 +1009,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -1766,6 +1781,32 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) int flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); + +/* + * Convert rte_mtr_color to mlx5 color. + * + * @param[in] rcol + * rte_mtr_color. + * + * @return + * mlx5 color. + */ +static inline int +rte_col_2_mlx5_col(enum rte_color rcol) +{ + switch (rcol) { + case RTE_COLOR_GREEN: + return MLX5_FLOW_COLOR_GREEN; + case RTE_COLOR_YELLOW: + return MLX5_FLOW_COLOR_YELLOW; + case RTE_COLOR_RED: + return MLX5_FLOW_COLOR_RED; + default: + break; + } + return MLX5_FLOW_COLOR_UNDEFINED; +} + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, @@ -2122,4 +2163,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 0589cafc30..0cf757898d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -216,31 +216,6 @@ flow_dv_attr_init(const struct rte_flow_item *item, union flow_dv_attr *attr, attr->valid = 1; } -/* - * Convert rte_mtr_color to mlx5 color. - * - * @param[in] rcol - * rte_mtr_color. - * - * @return - * mlx5 color. - */ -static inline int -rte_col_2_mlx5_col(enum rte_color rcol) -{ - switch (rcol) { - case RTE_COLOR_GREEN: - return MLX5_FLOW_COLOR_GREEN; - case RTE_COLOR_YELLOW: - return MLX5_FLOW_COLOR_YELLOW; - case RTE_COLOR_RED: - return MLX5_FLOW_COLOR_RED; - default: - break; - } - return MLX5_FLOW_COLOR_UNDEFINED; -} - struct field_modify_info { uint32_t size; /* Size of field in protocol header, in bytes. */ uint32_t offset; /* Offset of field in protocol header, in bytes. */ @@ -7342,8 +7317,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + last_item = MLX5_FLOW_ITEM_SQ; break; case MLX5_RTE_FLOW_ITEM_TYPE_TAG: break; @@ -8223,7 +8198,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * work due to metadata regC0 mismatch. 
*/ if ((!attr->transfer && attr->egress) && priv->representor && - !(item_flags & MLX5_FLOW_ITEM_TX_QUEUE)) + !(item_flags & MLX5_FLOW_ITEM_SQ)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, @@ -11242,9 +11217,9 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, const struct rte_flow_item *item, uint32_t key_type) { - const struct mlx5_rte_flow_item_tx_queue *queue_m; - const struct mlx5_rte_flow_item_tx_queue *queue_v; - const struct mlx5_rte_flow_item_tx_queue queue_mask = { + const struct mlx5_rte_flow_item_sq *queue_m; + const struct mlx5_rte_flow_item_sq *queue_v; + const struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; void *misc_v = @@ -13184,9 +13159,9 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: flow_dv_translate_item_tx_queue(dev, key, items, key_type); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + last_item = MLX5_FLOW_ITEM_SQ; break; case RTE_FLOW_ITEM_TYPE_GTP: flow_dv_translate_item_gtp(key, items, tunnel, key_type); @@ -13226,6 +13201,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? 
*item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
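flow_dv_translate_items_hws() above rebuilds a classic rte_flow_attr from the HWS table type so root-table (FW-steered) flows can reuse the DV translation code. A standalone sketch of that mapping, with hypothetical enum names standing in for MLX5DR_TABLE_TYPE_NIC_RX/NIC_TX/FDB:

        #include <stdbool.h>
        #include <stdio.h>

        /* Hypothetical table types; the real ones come from mlx5dr. */
        enum tbl_type { TBL_NIC_RX, TBL_NIC_TX, TBL_FDB };

        struct flow_dir_attr {
                bool ingress;
                bool egress;
                bool transfer;
        };

        /* Each table type maps to exactly one direction flag. */
        static struct flow_dir_attr
        tbl_type_to_attr(enum tbl_type type)
        {
                struct flow_dir_attr attr = {
                        .ingress  = (type == TBL_NIC_RX),
                        .egress   = (type == TBL_NIC_TX),
                        .transfer = (type == TBL_FDB),
                };
                return attr;
        }

        int
        main(void)
        {
                struct flow_dir_attr a = tbl_type_to_attr(TBL_FDB);

                printf("ingress=%d egress=%d transfer=%d\n",
                       a.ingress, a.egress, a.transfer);
                return 0;
        }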
* [v3 04/18] net/mlx5: add port to metadata conversion 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-10-14 11:48 ` [v3 03/18] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 05/18] common/mlx5: query set capability of registers Alex Vesker ` (13 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 10 +++++- drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 97 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 60677eb8d7..98c6374547 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1541,8 +1541,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); return eth_dev; +#else + DRV_LOG(ERR, "DV support is missing for HWS."); + goto error; +#endif + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 752b60d769..1d10932619 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1944,6 +1944,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_item_port_cleanup(dev); #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); + flow_hw_clear_port_info(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 81bed6f6a3..bdb0613d4a 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 288e09d5ba..17102623c1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1323,6 +1323,58 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation. 
+ */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? port_info : NULL; +} + +#ifdef HAVE_IBV_FLOW_DV_SUPPORT +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} +#endif + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 12498794a5..fe809a83b9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2208,6 +2208,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
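The conversion helpers above keep a flat per-port table so rule insertion can turn an ethdev port_id into a REG_C tag/mask pair with a single array access and no locking, returning NULL when the port has no usable vport metadata. A minimal standalone sketch of that pattern, with hypothetical names and sizes (not the PMD structures):

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        #define MAX_PORTS 32 /* hypothetical stand-in for RTE_MAX_ETHPORTS */

        struct port_info {
                uint32_t regc_mask;
                uint32_t regc_value;
                uint32_t is_wire:1;
        };

        static struct port_info port_infos[MAX_PORTS];

        static void
        set_port_info(uint16_t port_id, uint32_t mask, uint32_t value, int is_wire)
        {
                port_infos[port_id] = (struct port_info){
                        .regc_mask = mask,
                        .regc_value = value,
                        .is_wire = !!is_wire,
                };
        }

        /* NULL means "no metadata for this port", mirroring the zero-mask check. */
        static const struct port_info *
        conv_port_id(uint16_t port_id)
        {
                if (port_id >= MAX_PORTS || !port_infos[port_id].regc_mask)
                        return NULL;
                return &port_infos[port_id];
        }

        int
        main(void)
        {
                const struct port_info *info;

                set_port_info(3, 0x0000ff00, 0x00002a00, 0);
                info = conv_port_id(3);
                if (info)
                        printf("port 3: value=0x%" PRIx32 " mask=0x%" PRIx32 "\n",
                               info->regc_value, info->regc_mask);
                return 0;
        }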
* [v3 05/18] common/mlx5: query set capability of registers 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-10-14 11:48 ` [v3 04/18] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 06/18] net/mlx5: provide the available tag registers Alex Vesker ` (12 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, copy to a REG_C_x. The set capability are queried and saved for the future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 45 +++++++++++++++++++++++++--- 3 files changed, 73 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 76f0b6724f..9c185366d0 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1064,6 +1064,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1163,6 +1181,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index cceaf3411d..a10aa3331b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -263,6 +263,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9c1c93f916..ca4763f53d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1295,6 +1295,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, @@ -1892,6 +1893,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1921,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1945,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2016,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2046,6 +2082,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
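The capability query above reduces the per-domain bitmasks (NIC RX, NIC TX and, for the E-Switch manager, FDB) into a single set_reg_c byte where bit i set means REG_C_i can be written by modify-header in every domain. A standalone sketch of that intersection, using made-up capability values:

        #include <stdint.h>
        #include <stdio.h>

        int
        main(void)
        {
                /* Made-up per-domain capability masks; bit i == REG_C_i settable. */
                uint8_t rx_reg  = 0xfc; /* C_2..C_7 */
                uint8_t tx_reg  = 0xfe; /* C_1..C_7 */
                uint8_t esw_reg = 0xf4; /* C_2, C_4..C_7 */
                uint8_t set_reg_c = 0xff;
                unsigned int i;

                set_reg_c &= (uint8_t)(rx_reg & tx_reg);
                set_reg_c &= esw_reg; /* applied only on the E-Switch manager */
                for (i = 0; i < 8; i++)
                        if (set_reg_c & (1u << i))
                                printf("REG_C_%u is settable in all domains\n", i);
                return 0;
        }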
* [v3 06/18] net/mlx5: provide the available tag registers 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-10-14 11:48 ` [v3 05/18] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (11 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 121 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 98c6374547..aed55e6a62 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1545,6 +1545,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #ifdef HAVE_IBV_FLOW_DV_SUPPORT if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; #else DRV_LOG(ERR, "DV support is missing for HWS."); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1d10932619..b39ef1ecbe 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3c9e6bad53..741be2df98 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. */ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index bdb0613d4a..84bc471392 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 17102623c1..2002f6ef4b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1331,6 +1331,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1372,9 +1376,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) } #endif +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe809a83b9..78c741bb91 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2237,6 +2237,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
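flow_hw_init_tags_set() above turns the settable-register mask into a compact list of REG_C registers that TAG items may use, after reserving the meter color register and one more register for internal use and intersecting the result across probed devices. A minimal standalone sketch of the packing step; the reservation choices below are made up and do not reflect the exact PMD policy:

        #include <stdint.h>
        #include <stdio.h>

        #define REG_C_COUNT 8
        #define TAGS_MAX    8

        int
        main(void)
        {
                uint8_t set_reg_c = 0xfc;       /* made-up: C_2..C_7 settable */
                unsigned int mtr_color_reg = 2; /* made-up: meter color on C_2 */
                unsigned int reserved_last = 6; /* made-up: C_6 kept internal */
                int avl_tags[TAGS_MAX];
                unsigned int i, j = 0;

                /* Drop the reserved registers, then pack the rest densely. */
                set_reg_c &= (uint8_t)~(1u << mtr_color_reg);
                set_reg_c &= (uint8_t)~(1u << reserved_last);
                for (i = 0; i < REG_C_COUNT; i++)
                        if (set_reg_c & (1u << i))
                                avl_tags[j++] = (int)i;
                for (i = 0; i < j; i++)
                        printf("TAG index %u -> REG_C_%d\n", i, avl_tags[i]);
                return 0;
        }

flow_hw_get_reg_id() then resolves a TAG item's id by indexing the packed array, while META stays fixed to REG_C_1 at this stage.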
* [v3 07/18] net/mlx5: Add additional glue functions for HWS 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-10-14 11:48 ` [v3 06/18] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker ` (10 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 450dd6a06a..943d4bf833 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + enum mlx5dv_flow_table_type 
ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1449,6 +1548,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1507,7 +1607,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1516,8 +1618,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1526,6 +1632,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index c4903a6dce..ef7341a76a 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
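For illustration, a minimal sketch of how the new *_root glue callbacks introduced above could be driven from PMD code. Root flow tables stay under kernel ownership, so HWS reaches them through these plain Verbs/mlx5dv wrappers instead of going through an mlx5dv_dr domain. The helper name and its parameters (root_matcher_attr, match_value, n_actions, actions) are assumptions made for the example, not code from the series:

/*
 * Hypothetical sketch, assuming the glue table entries added in this patch.
 * It creates a matcher and a rule on a root flow table through the "_root"
 * callbacks; error handling is reduced to the minimum.
 */
#include "mlx5_glue.h"	/* drivers/common/mlx5/linux */

static void *
example_root_rule_create(struct ibv_context *ibv_ctx,
			 struct mlx5dv_flow_matcher_attr *root_matcher_attr,
			 void *match_value, size_t n_actions, void *actions)
{
	void *matcher;
	void *flow;

	/* Root matcher goes straight to mlx5dv, bypassing mlx5dv_dr. */
	matcher = mlx5_glue->dv_create_flow_matcher_root(ibv_ctx,
							 root_matcher_attr);
	if (!matcher)
		return NULL;

	flow = mlx5_glue->dv_create_flow_root(matcher, match_value,
					      n_actions, actions);
	if (!flow) {
		/* The matcher is useless without the rule, release it. */
		mlx5_glue->dv_destroy_flow_matcher_root(matcher);
		return NULL;
	}
	return flow;
}

The *_root action constructors (modify header, packet reformat) follow the same idea: they call the mlx5dv entry points directly, so root-table rules never depend on a DR domain object.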
* [v3 08/18] net/mlx5/hws: Add HWS command layer 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-10-14 11:48 ` [v3 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (9 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 607 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++++++++ 3 files changed, 1775 insertions(+), 10 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ca4763f53d..371942ae50 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1299,6 +1309,7 @@ enum { MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1317,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1373,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1505,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1681,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1691,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1876,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2061,7 +2111,17 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 hairpin_sq_wqe_bb_size[0x5]; u8 hairpin_sq_wq_in_host_mem[0x1]; u8 hairpin_data_buffer_locked[0x1]; - u8 reserved_at_16a[0x696]; + u8 reserved_at_16a[0x36]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; }; struct mlx5_ifc_esw_cap_bits { @@ -2074,6 +2134,37 @@ struct mlx5_ifc_esw_cap_bits { u8 reserved_at_80[0x780]; }; +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 
ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_cmd_hca_cap_2_bits cmd_hca_cap_2; @@ -2085,6 +2176,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2098,6 +2190,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2978,6 +3084,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2986,6 +3093,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2993,9 +3105,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3029,6 +3146,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + 
MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 
modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3044,6 +3398,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4253,6 +4637,209 @@ struct mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 
other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destnation */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8 
reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..2211e49598 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,948 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, 
fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + 
ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case 
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "Not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct 
mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void *attr; + + if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "Pattern length %d exceeds limit %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* Pattern_length is in ddwords */ + 
MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + 
MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[DEVX_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(create_sq_in, in, ctx); + void *wqc = DEVX_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[DEVX_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[DEVX_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + 
capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap. + log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 0000000000..2548b2b238 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,230 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t 
ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr 
*ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif /* MLX5DR_CMD_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
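For orientation, here is a small hypothetical usage sketch of the command layer declared above; it is not part of the patch. It assumes an already-opened ibv_context and the driver's internal headers, and the wrapper name hws_caps_supported_example() is made up for illustration. mlx5dr_cmd_query_caps() fills the mlx5dr_cmd_query_caps structure shown above, which later layers consult when sizing pools and choosing update modes.

/* Hypothetical sketch, not part of the patch: check that the device
 * exposes WQE-based rule updates before bringing up HWS.
 */
#include "mlx5dr_internal.h"

static int hws_caps_supported_example(struct ibv_context *ibv_ctx)
{
	struct mlx5dr_cmd_query_caps caps = {0};
	int ret;

	/* Query HCA capabilities through DevX and fill the struct above */
	ret = mlx5dr_cmd_query_caps(ibv_ctx, &caps);
	if (ret)
		return ret;

	/* HWS depends on WQE-based STE updates (assumption for this example) */
	if (!caps.wqe_based_update) {
		rte_errno = ENOTSUP;
		return rte_errno;
	}

	return 0;
}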
* [v3 09/18] net/mlx5/hws: Add HWS pool and buddy 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-10-14 11:48 ` [v3 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker ` (8 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1047 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..9dba95f0b1 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,201 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + unsigned long n, unsigned long m) +{ + uint64_t out_slab = 0; + uint32_t pos = 0; /* Compilation warn */ + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = simple_calloc(buddy->max_order + 1, 
sizeof(*buddy->num_free)); + if (!buddy->num_free) { + rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int)buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int)buddy->max_order; ++i) { + rte_free(buddy->bits[i]); + } + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int)buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..b9ec446b99 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); + +#endif /* MLX5DR_BUDDY_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..2bfda5b4a5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, + int resource_idx) +{ + 
mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + static struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* Find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Fail to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have new memory pool, should be place for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk 
*chunk) +{ + int ret = 0; + + /* Go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /*sharing the same resource, also means that all the elements are with size 1*/ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks in size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto 
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* One memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * allocate resource and give it. + * - When free that chunk: + * the resource is freed. 
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * aloocate the first and only slot of memory/resource + * when it ended return error. 
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* Support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..cd12c3ab9a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* 
Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only a one resource in that pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* No sharing resources between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects are in the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Manged by buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* Uses for allocating chunk of big memory, each element has its own resource in the FW*/ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* One resource only, all the elements are with same one size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Many resources, the memory allocated with buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* DB */ + struct mlx5dr_pool_db db; + /* Functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void 
mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif /* MLX5DR_POOL_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
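To illustrate how the buddy allocator underneath the pool is meant to be driven, here is a short hypothetical sketch, not part of the patch. It mirrors what mlx5dr_pool does internally: create a buddy over 2^max_order elements, request power-of-two segments by order, and free the structure after cleanup. The function name buddy_usage_example() is made up for illustration.

/* Hypothetical sketch, not part of the patch: stand-alone use of the
 * buddy allocator added above. Segment values are offsets into whatever
 * range (e.g. an STE range) the caller associates with this buddy.
 */
#include "mlx5dr_internal.h"
#include "mlx5dr_buddy.h"

static int buddy_usage_example(void)
{
	struct mlx5dr_buddy_mem *buddy;
	int seg_one, seg_eight;

	buddy = mlx5dr_buddy_create(10); /* Manage 1 << 10 elements */
	if (!buddy)
		return -1;

	seg_one = mlx5dr_buddy_alloc_mem(buddy, 0);   /* 1 element */
	seg_eight = mlx5dr_buddy_alloc_mem(buddy, 3); /* 8 contiguous elements */
	if (seg_one < 0 || seg_eight < 0)
		goto cleanup; /* -1 means no free segment of that order */

	/* ... use the segments as offsets into the backing resource ... */

	mlx5dr_buddy_free_mem(buddy, seg_eight, 3);
	mlx5dr_buddy_free_mem(buddy, seg_one, 0);

cleanup:
	/* As in mlx5dr_pool, cleanup releases the bitmaps and the caller
	 * frees the buddy structure itself.
	 */
	mlx5dr_buddy_cleanup(buddy);
	simple_free(buddy);
	return 0;
}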
* [v3 10/18] net/mlx5/hws: Add HWS send layer 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-10-14 11:48 ` [v3 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (7 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 844 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++++++++++ 2 files changed, 1119 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..26904a9040 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,844 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned int idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue 
= queue; + /* Currently only one send ring is supported */ + ctrl.send_ring = &queue->send_ring[0]; + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned int idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = + rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on good flow as the enum + * is arrange it away creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & cq->ncqe_mask; + struct 
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = + htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries, sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, &ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + 
+close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + int err = 0; + uint32_t i; + + /* Open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + 
mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = -EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..8d4769495d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBS. + * This means a maximum of 16 such operations per rule. + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; + void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned int last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned int head_dep_idx; + unsigned int tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct 
mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for + * both 32bit and 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. + */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif /* MLX5DR_SEND_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
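For orientation before the definer patch below, here is a hedged sketch of how the posting API declared in this header is meant to be chained. The wrapper function name and its parameters are invented for illustration; the structs, enums and mlx5dr_send_engine_post_*() calls are the ones declared above, and the usual mlx5dr_internal.h includes are assumed. The mlx5dr_send_ste() helper declared in the header presumably wraps a sequence much like this one.

static void
example_post_gta_wqe(struct mlx5dr_send_engine *queue,
		     struct mlx5dr_wqe_gta_ctrl_seg *gta_ctrl,
		     struct mlx5dr_wqe_gta_data_seg_ste *gta_data,
		     uint32_t rtc_id, uint32_t *used_rtc_id,
		     void *user_data)
{
	struct mlx5dr_send_engine_post_attr attr = {0};
	struct mlx5dr_send_engine_post_ctrl ctrl;
	size_t len;
	char *buf;

	/* Reserve room in the SQ and a completion tracking entry */
	ctrl = mlx5dr_send_engine_post_start(queue);

	/* First WQEBB: GTA control segment (the generic WQE control
	 * segment is handled inside the send layer)
	 */
	mlx5dr_send_engine_post_req_wqe(&ctrl, &buf, &len);
	memcpy(buf, gta_ctrl, MLX5DR_WQE_SZ_GTA_CTRL);

	/* Second WQEBB: GTA data segment carrying the match tag/actions */
	mlx5dr_send_engine_post_req_wqe(&ctrl, &buf, &len);
	memcpy(buf, gta_data, MLX5DR_WQE_SZ_GTA_DATA);

	/* Describe the operation and let post_end ring the doorbell */
	attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
	attr.opmod = MLX5DR_WQE_OPMOD_GTA_STE;
	attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
	attr.notify_hw = 1;
	attr.id = rtc_id;
	attr.used_id = used_rtc_id;
	attr.user_data = user_data;
	mlx5dr_send_engine_post_end(&ctrl, &attr);
}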
* [v3 11/18] net/mlx5/hws: Add HWS definer layer 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-10-14 11:48 ` [v3 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 12/18] net/mlx5/hws: Add HWS context object Alex Vesker ` (6 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++++++ 2 files changed, 2553 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..6b98eb8c96 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1968 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_NO_VLAN 0x0 +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely((bit_off) < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, byte_off, (bit_off), (mask)); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + (*((rte_be32_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 4) + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + (*((rte_be16_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, 
byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 2) + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, 
udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_pdu, v->hdr.type, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_qfi, v->hdr.qfi, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, source_qp, v->queue, mlx5_rte_flow_item_sq) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ + X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) + +/* Item set function format */ +#define X(set_type, func_name, value, item_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct item_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_eth_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_eth *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_vlan ? STE_CVLAN : STE_NO_VLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vlan *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_more_vlan ? 
STE_SVLAN : STE_CVLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->hdr.icmp_cksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (rte_be_to_cpu_16(v->hdr.icmp_ident) << __mlx5_dw_bit_off(header_icmp, ident)) | + (rte_be_to_cpu_16(v->hdr.icmp_seq_nb) << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->checksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + 
fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* Mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } 
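+	/*
+	 * Only fields with a non-zero mask claim a field-copy (fc) slot:
+	 * fc->item_idx records which rte item supplies the value when the
+	 * rule tag is built, fc->tag_set is the setter generated from
+	 * LIST_OF_FIELDS_INFO, and DR_CALC_SET() resolves the byte/bit
+	 * offset of the field in the inner or outer header layout.
+	 */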
+ + if (m->next_proto_id) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + if (m->hdr.type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + if (m->hdr.qfi) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Port ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport. 
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc, registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct mlx5_rte_flow_item_sq *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!m) + return 0; + + if (m->queue) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_source_qp_set; + DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + if (m->sequence.sequence) { + fc = 
&cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2); + } + + return 0; +} 
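/*
 * Illustrative sketch, not part of this patch: every conv_item_*()
 * helper above follows the same recipe. A converter for a hypothetical
 * item with a single masked field could look roughly like this; the
 * item struct and its setter are made-up names, while cd, fc,
 * DR_CALC_FNAME() and DR_CALC_SET() are the real helpers defined
 * earlier in this file.
 */
#if 0	/* example only, never compiled */
static int
mlx5dr_definer_conv_item_example(struct mlx5dr_definer_conv_data *cd,
				 struct rte_flow_item *item,
				 int item_idx)
{
	const struct rte_flow_item_example *m = item->mask; /* hypothetical */
	struct mlx5dr_definer_fc *fc;
	bool inner = cd->tunnel;

	if (!m)
		return 0;	/* nothing masked, nothing to translate */

	if (m->val) {	/* hypothetical field carried in the IP TOS slot */
		fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)];
		fc->item_idx = item_idx;
		fc->tag_set = &mlx5dr_definer_example_val_set; /* hypothetical setter */
		DR_CALC_SET(fc, eth_l3, tos, inner);
	}

	return 0;
}
#endif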
+ +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meter_color *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + MLX5_ASSERT(reg > 0); + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_meter_color_set; + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + ret = mlx5dr_definer_conv_item_sq(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_SQ; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METER_COLOR; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to 
skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, 
+ struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + 
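+	/*
+	 * At this point hl holds the union of all masked header-layout bits
+	 * and mt->fc has one entry per masked field. The remaining steps
+	 * pick DW/byte selectors that cover hl, rebase each fc offset from
+	 * the header layout onto the resulting tag layout, build the tag
+	 * mask and finally create the definer object through
+	 * mlx5dr_cmd_definer_create().
+	 */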
/* Find the definer for given header layout */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..d52c6b0627 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipsec_bits { + u8 spi[0x20]; + u8 sequence_number[0x20]; + u8 reserved[0x10]; + u8 ipsec_syndrome[0x8]; + u8 next_header[0x8]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 unsupported_udp_misc_outer[0x20]; + u8 unsupported_udp_misc_inner[0x20]; + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 unsupported_mpls_outer[0xa0]; + u8 unsupported_mpls_inner[0xa0]; + u8 unsupported_config_headers_outer[0x80]; + u8 unsupported_config_headers_inner[0x80]; + u8 
unsupported_random_number[0x20]; + struct mlx5_ifc_definer_hl_ipsec_bits ipsec; + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 unsupported_utc_timestamp[0x40]; + u8 unsupported_free_running_timestamp[0x40]; + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + /* struct x ib_l3_extended; */ + /* struct x rwh */ + /* struct x dcceth */ + /* struct x dceth */ +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 type[0x8]; + u8 code[0x8]; + u8 cksum[0x10]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 ident[0x10]; + u8 seq_nb[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif /* MLX5DR_DEFINER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
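The selector search in mlx5dr_definer_best_hl_fit_recu() above is a backtracking walk over the header-layout DWs: all-zero DWs are skipped, every non-zero DW has to be covered either by a full/limited DW selector or by per-byte selectors, and the recursion unwinds whenever the selector budgets run out. Below is a minimal standalone model of that idea, written only to illustrate the technique: the budgets, the toy mask and the byte-offset convention are assumptions of the sketch, and it leaves out the limited-DW and 255-byte-offset restrictions enforced by the real code.

    /*
     * Standalone model of the definer selector search: cover every
     * non-zero 32-bit mask word either with one full-DW selector or
     * with per-byte selectors, backtracking when a budget runs out.
     * Budgets, byte-offset convention and the example mask are
     * illustrative, not the HW limits.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N_DWS 8

    static int dw_sel[N_DWS];        /* chosen full-DW selectors (DW index) */
    static int byte_sel[N_DWS * 4];  /* chosen byte selectors (byte offset) */

    static bool solve(const uint32_t *hl, int idx, int dw_left, int byte_left,
                      int used_dw, int used_bytes)
    {
        int bytes_set = 0;
        int i;

        /* Skip words with no bits set, nothing to cover there */
        while (idx < N_DWS && !hl[idx])
            idx++;

        if (idx == N_DWS) {
            /* Everything covered, print the solution */
            printf("DW selectors:");
            for (i = 0; i < used_dw; i++)
                printf(" %d", dw_sel[i]);
            printf("\nByte selectors:");
            for (i = 0; i < used_bytes; i++)
                printf(" %d", byte_sel[i]);
            printf("\n");
            return true;
        }

        /* Option 1: spend one full-DW selector on this word */
        if (dw_left) {
            dw_sel[used_dw] = idx;
            if (solve(hl, idx + 1, dw_left - 1, byte_left,
                      used_dw + 1, used_bytes))
                return true;
        }

        /* Option 2: cover only the bytes that are actually used */
        for (i = 0; i < 4; i++)
            if (hl[idx] & (0xffu << (i * 8)))
                bytes_set++;

        if (bytes_set <= byte_left) {
            int taken = 0;

            for (i = 0; i < 4; i++)
                if (hl[idx] & (0xffu << (i * 8)))
                    byte_sel[used_bytes + taken++] = idx * 4 + i;

            if (solve(hl, idx + 1, dw_left, byte_left - bytes_set,
                      used_dw, used_bytes + bytes_set))
                return true;
        }

        /* Neither option leads to a full cover from here, backtrack */
        return false;
    }

    int main(void)
    {
        /* Toy header layout: two dense words and a few sparse bytes */
        uint32_t hl[N_DWS] = { 0, 0xffffffff, 0, 0xffff00ff,
                               0x000000ff, 0, 0x00ff0000, 0 };

        if (!solve(hl, 0, 2, 8, 0, 0))
            printf("no selector combination found\n");
        return 0;
    }

With the toy mask the sketch spends its full-DW selectors on the two dense words and covers the remaining sparse bytes with byte selectors, which is the same trade-off the real search makes before falling back to the jumbo definer layout with more DW selectors.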
* [v3 12/18] net/mlx5/hws: Add HWS context object 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-10-14 11:48 ` [v3 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 13/18] net/mlx5/hws: Add HWS table object Alex Vesker ` (5 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 263 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..ae86694a51 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,223 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]", i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return 
mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update) { + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* Adding rules by hash and by offset are requirements */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH) || + !IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET)) { + DR_LOG(INFO, "Required HWS RTC update mode not supported"); + return; + } + + /* Support for SELECT definer ID is required */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..b0c7802daf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, + 
MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif /* MLX5DR_CONTEXT_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
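For reference, a minimal caller of the new context API could look like the sketch below. It is illustrative only: it assumes the mlx5dr.h header from this series is on the include path, that the first verbs device found is a ConnectX NIC with HWS support, and the queue count and depth are arbitrary example values.

    /*
     * Illustrative caller of mlx5dr_context_open()/mlx5dr_context_close().
     * Sketch only: it builds inside the mlx5 PMD tree where mlx5dr.h is
     * available and assumes the first verbs device supports HWS.
     */
    #include <stddef.h>
    #include <infiniband/verbs.h>
    #include "mlx5dr.h"

    static struct mlx5dr_context *open_hws_context(void)
    {
        struct mlx5dr_context_attr dr_ctx_attr = {0};
        struct mlx5dr_context *dr_ctx;
        struct ibv_device **dev_list;
        struct ibv_context *ibv_ctx;

        dev_list = ibv_get_device_list(NULL);
        if (!dev_list)
            return NULL;
        if (!dev_list[0]) {
            ibv_free_device_list(dev_list);
            return NULL;
        }

        /* Assumption: dev_list[0] is a ConnectX device with HWS support */
        ibv_ctx = ibv_open_device(dev_list[0]);
        ibv_free_device_list(dev_list);
        if (!ibv_ctx)
            return NULL;

        dr_ctx_attr.pd = NULL;         /* let mlx5dr allocate a private PD */
        dr_ctx_attr.queues = 4;        /* example: four send queues */
        dr_ctx_attr.queue_size = 256;  /* example queue depth */

        dr_ctx = mlx5dr_context_open(ibv_ctx, &dr_ctx_attr);
        if (!dr_ctx)
            ibv_close_device(ibv_ctx);

        return dr_ctx;
    }

Teardown would be the mirror image: mlx5dr_context_close() followed by ibv_close_device(), since the context does not take ownership of the verbs context it was opened on.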
* [v3 13/18] net/mlx5/hws: Add HWS table object 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-10-14 11:48 ` [v3 12/18] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (4 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..d3f77e4780 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* Call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int +mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + 
struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* Take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + 
pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..786dddfaa4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif /* MLX5DR_TABLE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
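The FDB default-miss handling above follows a get/put pattern: the first FDB table creates the shared miss flow table, every later FDB table only takes a reference, and the last put destroys it. In the driver this is keyed by table type and runs under ctx->ctrl_lock; the standalone sketch below models only the refcounting itself, with illustrative names and no locking or devx objects.

    /*
     * Standalone model of the shared default-miss refcounting used for
     * FDB tables: the first user creates the shared object, later users
     * only take a reference, the last put destroys it. Names are
     * illustrative and locking is left out.
     */
    #include <stdio.h>
    #include <stdlib.h>

    struct shared_miss {
        int refcount;
        /* the driver keeps the miss flow table devx objects here */
    };

    static struct shared_miss *default_miss; /* one per context and table type */

    static struct shared_miss *miss_tbl_get(void)
    {
        if (default_miss) {
            default_miss->refcount++;
            return default_miss;
        }

        default_miss = calloc(1, sizeof(*default_miss));
        if (!default_miss)
            return NULL;

        default_miss->refcount = 1;
        printf("created shared default miss table\n");
        return default_miss;
    }

    static void miss_tbl_put(void)
    {
        if (--default_miss->refcount)
            return;

        printf("destroyed shared default miss table\n");
        free(default_miss);
        default_miss = NULL;
    }

    int main(void)
    {
        miss_tbl_get();  /* first FDB table: creates the shared object */
        miss_tbl_get();  /* second FDB table: reference only */
        miss_tbl_put();  /* still referenced, nothing is freed */
        miss_tbl_put();  /* last reference, the object is destroyed */
        return 0;
    }

Keeping create-on-first-get and destroy-on-last-put symmetric is what lets mlx5dr_table_destroy_default_ft() call the down helper unconditionally, since it simply returns for non-FDB table types.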
* [v3 14/18] net/mlx5/hws: Add HWS matcher object 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-10-14 11:48 ` [v3 13/18] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker ` (3 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 2 files changed, 998 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..835a3908eb --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,922 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Find location in matcher list */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + 
if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + if (next) { + /* Connect previous end FT to next RTC if exists */ + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { + /* Matcher is last, point prev end FT to default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + enum mlx5dr_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? 
"match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} + +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; 
+ struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combinabtion is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additioanl STEs required for matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + +free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void 
mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i - 1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.table_type = matcher->tbl->type; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return 
ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + 
mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + enum mlx5dv_flow_table_type ft_type; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; + break; + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.ft_type = ft_type; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..b7bf94762c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match template */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action template */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table to the main table with + * 3% of the main table rows will be enough resources for high insertion + * success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3/100) = x - 5.05 ~ 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Thrashold to determine if amount of rules require a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif /* MLX5DR_MATCHER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v3 15/18] net/mlx5/hws: Add HWS rule object 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-10-14 11:48 ` [v3 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 16/18] net/mlx5/hws: Add HWS action object Alex Vesker ` (2 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..b27318e6d4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct flow_hw_port_info *vport; + const struct rte_flow_item_ethdev *v; + + /* Flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err) { + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..96c85674f2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* 
Action STE pool ID */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif /* MLX5DR_RULE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
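To make the asynchronous flow described in the commit message concrete, a single insertion through one queue could look roughly like the sketch below. This is not part of the patch: the matcher, item array and rule action array are assumed to exist, queue 0 is assumed to have been created with the context, and completion polling (which returns user_data) is left out.

#include <stdlib.h>

#include <rte_errno.h>
#include <rte_flow.h>

#include "mlx5dr.h"

/* Hypothetical helper (illustration only): enqueue one rule on queue 0. */
static struct mlx5dr_rule *
example_rule_insert(struct mlx5dr_matcher *matcher,
		    const struct rte_flow_item items[],
		    struct mlx5dr_rule_action rule_actions[])
{
	struct mlx5dr_rule_attr rule_attr = {0};
	struct mlx5dr_rule *rule;

	/* The rule handle is opaque; the caller owns its memory */
	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		return NULL;

	rule_attr.queue_id = 0;
	rule_attr.user_data = rule;	/* must be non-NULL, echoed in the completion */
	rule_attr.burst = 0;		/* no batching, notify the HW right away */

	/* Use the first match template and first action template of the matcher */
	if (mlx5dr_rule_create(matcher, 0, items, 0, rule_actions,
			       &rule_attr, rule)) {
		/* rte_errno is set, e.g. EBUSY when the send queue is full */
		free(rule);
		return NULL;
	}

	/* The rule is now in flight; poll the queue for the completion carrying
	 * user_data before the handle is used or passed to mlx5dr_rule_destroy().
	 */
	return rule;
}

mlx5dr_rule_destroy() takes the same attribute structure and likewise completes asynchronously through the chosen queue.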
* [v3 16/18] net/mlx5/hws: Add HWS action object 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-10-14 11:48 ` [v3 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-14 11:48 ` [v3 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2221 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + 4 files changed, 3068 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..d3eb091498 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2221 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define WIRE_PORT 0xFFFF + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + 
BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "No such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & 
MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = MLX5DV_FLOW_ACTION_DROP; + break; + case 
MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + enum mlx5dr_table_type table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = + mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /* The FW doesn't allow to go back to wire in RX, so change it to DROP */ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* According to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed querying port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + ret = 
mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + enum mlx5dr_action_type *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static void +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, enum mlx5dv_flow_table_type *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + + /* Convert action to FT type and verbs reformat type */ + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = EINVAL; + return rte_errno; + } + args_log_size += 
bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* The action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* Assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "Data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + enum mlx5dr_action_type action_type; + struct mlx5dr_action *action; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + 
if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit HWS (flags: 0x%x, log_bulk_size: %d)", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize a single modify action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case MLX5DR_ACTION_TYP_POP_VLAN: +
mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + mlx5dr_action_free_single_stc(ctx, tbl_type, 
&default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* Move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* Move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* Actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* Copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch(rule_action->action->type) { + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if action were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reseve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation. 
+ */ + setter = start_setter; + last_setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { + /* We have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single ation counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST); + + at->num_actions = num_actions - 1; + at->action_type_arr = simple_calloc(num_actions, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..f14d91f994 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumvo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag { + ASF_SINGLE1 
= 1 << 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + +static 
inline void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif /* MLX5DR_ACTION_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..9b73707ee8 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + /* Return the roundup of log2(data_size) */ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* Cache and cache element handling */ +int 
mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if ((cur_num_of_actions != num_of_actions) || (cur_type != type)) + return false; + + /* All decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { + /* Compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to 
allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + +static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, 
(uint8_t *) wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); // TODO OPT: GTA ctrl might be ignored in case of arg + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* Poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* Alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "Arg 
size %d does not fit FW capability", + args_log_size + bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "Failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..8a4670427f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* Modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + /* Keep MIN updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + /* Protect pattern list */ + pthread_spinlock_t lock; + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void 
mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif /* MLX5DR_PAT_ARG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
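A note on the argument sizing used by mlx5dr_action_handle_reformat_args() and mlx5dr_arg_create_modify_header_arg() in the patch above: the data (reformat header or modify-header pattern) is rounded up to a power-of-two number of 64B argument chunks, and the log of the rule bulk is then added to obtain the allocation order of the ARG object. The following minimal standalone sketch of that arithmetic is not part of the patch; the helper name and the example values (50B header, 2^12 bulk) are illustrative only.

#include <stdint.h>
#include <stdio.h>

#define ARG_DATA_SIZE 64 /* bytes carried by one argument chunk (one WQE write) */

/* Round data_size up to log2(number of 64B chunks); this mirrors the
 * if-ladder in mlx5dr_arg_data_size_to_arg_log_size().
 */
static uint32_t data_size_to_arg_log_size(uint16_t data_size)
{
	uint32_t log_size = 0;

	while ((uint32_t)ARG_DATA_SIZE << log_size < data_size)
		log_size++;
	return log_size;
}

int main(void)
{
	uint16_t hdr_sz = 50;    /* e.g. an L2-to-tunnel-L2 encap header */
	uint32_t log_bulk = 12;  /* room for 2^12 rules sharing the action */
	uint32_t per_rule_log = data_size_to_arg_log_size(hdr_sz);

	/* The ARG object must hold one slot per rule in the bulk, so the
	 * requested order is the per-rule order plus the bulk order; the
	 * driver then validates it against the FW min/max capabilities.
	 */
	printf("per-rule chunks: %u, requested order: 2^%u chunks\n",
	       1u << per_rule_log, per_rule_log + log_bulk);
	return 0;
}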
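The decap-L3 handling above (mlx5dr_action_prepare_decap_l3_actions() together with mlx5dr_action_prepare_decap_l3_data()) rebuilds the inner L2 header with 4-byte inline inserts filled tail-first, plus a final 2-byte remove that strips the padding left over from a 14B or 18B header. The self-contained sketch below is not part of the patch; it simply models each inline INSERT as a prepend at packet start to show why the reversed copy plus the 2-byte remove reproduce the header in its original order.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define CHUNK 4     /* inline data carried by one INSERT action */
#define HDR_LEN 14  /* L2 header without VLAN */

/* Model an inline INSERT anchored at PACKET_START: it prepends its data. */
static void prepend(uint8_t *pkt, int *len, const uint8_t *data, int sz)
{
	memmove(pkt + sz, pkt, *len);
	memcpy(pkt, data, sz);
	*len += sz;
}

int main(void)
{
	uint8_t hdr[HDR_LEN], pkt[64] = {0}, chunk[CHUNK];
	int pkt_len = 0;
	int i, off;

	for (i = 0; i < HDR_LEN; i++)
		hdr[i] = (uint8_t)i;

	/* The driver copies from the end of the header backwards, so the
	 * tail chunk is prepended first and ends up deepest in the packet.
	 */
	for (off = HDR_LEN - CHUNK; off >= 0; off -= CHUNK)
		prepend(pkt, &pkt_len, &hdr[off], CHUNK);

	/* 14 is not a multiple of 4: the last chunk carries 2 padding bytes
	 * followed by hdr[0..1].
	 */
	memset(chunk, 0, sizeof(chunk));
	memcpy(chunk + 2, hdr, 2);
	prepend(pkt, &pkt_len, chunk, CHUNK);

	/* REMOVE_WORDS with remove_size = 1 word strips the 2 padding bytes. */
	memmove(pkt, pkt + 2, pkt_len - 2);
	pkt_len -= 2;

	assert(pkt_len == HDR_LEN);
	assert(memcmp(pkt, hdr, HDR_LEN) == 0);
	printf("reassembled %d header bytes in original order\n", pkt_len);
	return 0;
}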
* [v3 17/18] net/mlx5/hws: Add HWS debug layer 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-10-14 11:48 ` [v3 16/18] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 2022-10-14 11:48 ` [v3 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria From: Hamdan Igbaria <hamdani@nvidia.com> The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 490 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..890a761c48 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int +mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int +mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..cf00170f7d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0.DPDK" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif /* MLX5DR_DEBUG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
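For reference, a minimal usage sketch (not taken from the patch) of the mlx5dr_debug_dump() entry point added above. The helper name and the output path are assumptions; only mlx5dr_debug_dump() itself comes from the patch.

#include <errno.h>
#include <stdio.h>

/*
 * Illustrative sketch, not part of the patch: dump the HWS state of an
 * open context to a CSV file for offline inspection.
 */
static int example_dump_hws_state(struct mlx5dr_context *ctx)
{
        FILE *f = fopen("/tmp/mlx5dr_dump.csv", "w"); /* assumed path */
        int ret;

        if (!f)
                return -errno;

        /* Walks context -> send engines -> tables -> matchers under the
         * context control lock and writes one CSV row per object.
         */
        ret = mlx5dr_debug_dump(ctx, f);

        fclose(f);
        return ret;
}

Each row starts with one of the numeric MLX5DR_DEBUG_RES_TYPE_* codes from mlx5dr_debug.h, so the resulting CSV can be filtered and post-processed offline.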
* [v3 18/18] net/mlx5/hws: Enable HWS 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-10-14 11:48 ` [v3 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-14 11:48 ` Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-14 11:48 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implenation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/meson.build | 2 + drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 210 ++++++++-- drivers/net/mlx5/hws/mlx5dr_internal.h | 93 +++++ drivers/net/mlx5/meson.build | 5 +- drivers/net/mlx5/mlx5.h | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 ------------------- drivers/net/mlx5/mlx5_flow.h | 11 +- drivers/net/mlx5/mlx5_flow_hw.c | 10 +- 9 files changed, 307 insertions(+), 427 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index f9d1937571..8c95e7ab56 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -229,6 +229,8 @@ foreach arg:has_member_args endforeach configure_file(output : 'mlx5_autoconf.h', configuration : config) +MLX5_HAVE_IBV_FLOW_DV_SUPPORT=config.get('HAVE_IBV_FLOW_DV_SUPPORT') + # Build Glue Library if dlopen_ibverbs dlopen_name = 'mlx5_glue' diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..f94798dd2d --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/hws/mlx5dr.h similarity index 66% rename from drivers/net/mlx5/mlx5_dr.h rename to drivers/net/mlx5/hws/mlx5dr.h index d0b2c15652..980bda0d63 100644 --- a/drivers/net/mlx5/mlx5_dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. 
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates */ -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ +#ifndef MLX5DR_H_ +#define MLX5DR_H_ #include <rte_flow.h> @@ -26,6 +26,27 @@ enum mlx5dr_matcher_resource_mode { MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, }; +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, @@ -33,7 +54,10 @@ enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. + */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, }; enum mlx5dr_action_reformat_type { @@ -43,6 +67,18 @@ enum mlx5dr_action_reformat_type { MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, }; +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + enum mlx5dr_match_template_flags { /* Allow relaxed matching by skipping derived dependent match fields. */ MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, @@ -56,7 +92,7 @@ enum mlx5dr_send_queue_actions { struct mlx5dr_context_attr { uint16_t queues; uint16_t queue_size; - size_t initial_log_ste_memory; + size_t initial_log_ste_memory; /* Currently not in use */ /* Optional PD used for allocating res ources */ struct ibv_pd *pd; }; @@ -66,9 +102,21 @@ struct mlx5dr_table_attr { uint32_t level; }; +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + struct mlx5dr_matcher_attr { + /* Processing priority inside table */ uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; union { struct { uint8_t sz_row_log; @@ -84,6 +132,8 @@ struct mlx5dr_matcher_attr { struct mlx5dr_rule_attr { uint16_t queue_id; void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; uint32_t burst:1; }; @@ -92,6 +142,9 @@ struct mlx5dr_devx_obj { uint32_t id; }; +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. 
+ */ struct mlx5dr_rule_action { struct mlx5dr_action *action; union { @@ -114,33 +167,19 @@ struct mlx5dr_rule_action { } reformat; struct { - rte_be32_t vlan_hdr; + __be32 vlan_hdr; } push_vlan; - }; -}; - -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ }; /* Open a context used for direct rule insertion using hardware steering. @@ -153,7 +192,7 @@ struct mlx5dr_rule { * @return pointer to mlx5dr_context on success NULL otherwise. */ struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, +mlx5dr_context_open(struct ibv_context *ibv_ctx, struct mlx5dr_context_attr *attr); /* Close a context used for direct hardware steering. @@ -205,6 +244,26 @@ mlx5dr_match_template_create(const struct rte_flow_item items[], */ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + /* Create a new direct rule matcher. Each matcher can contain multiple rules. * Matchers on the table will be processed by priority. Matching fields and * mask are described by the match template. In some cases multiple match @@ -216,6 +275,10 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); * Array of match templates to be used on matcher. * @param[in] num_of_mt * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. * @param[in] attr * Attributes used for matcher creation. * @return pointer to mlx5dr_matcher on success NULL otherwise. @@ -224,6 +287,8 @@ struct mlx5dr_matcher * mlx5dr_matcher_create(struct mlx5dr_table *table, struct mlx5dr_match_template *mt[], uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, struct mlx5dr_matcher_attr *attr); /* Destroy direct rule matcher. @@ -245,11 +310,13 @@ size_t mlx5dr_rule_get_handle_size(void); * @param[in] matcher * The matcher in which the new rule will be created. * @param[in] mt_idx - * Match template index to create the rule with. + * Match template index to create the match with. * @param[in] items * The items used for the value matching. * @param[in] rule_actions * Rule action to be executed on match. 
+ * @param[in] at_idx + * Action template index to apply the actions with. * @param[in] num_of_actions * Number of rule actions. * @param[in] attr @@ -261,8 +328,8 @@ size_t mlx5dr_rule_get_handle_size(void); int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, uint8_t mt_idx, const struct rte_flow_item items[], + uint8_t at_idx, struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, struct mlx5dr_rule_attr *attr, struct mlx5dr_rule *rule_handle); @@ -317,6 +384,21 @@ mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, struct mlx5dr_table *tbl, uint32_t flags); +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + /* Create direct rule goto TIR action. * * @param[in] ctx @@ -400,10 +482,66 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, struct mlx5dr_action * mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, size_t pattern_sz, - rte_be64_t pattern[], + __be64 pattern[], uint32_t log_bulk_size, uint32_t flags); +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + /* Destroy direct rule action. * * @param[in] action @@ -432,11 +570,11 @@ int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, /* Perform an action on the queue * * @param[in] ctx - * The context to which the queue belong to. 
+ * The context to which the queue belong to. * @param[in] queue_id - * The id of the queue to perform the action on. + * The id of the queue to perform the action on. * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) + * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) * @return zero on success non zero otherwise. */ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, @@ -448,7 +586,7 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, * @param[in] ctx * The context which to dump the info from. * @param[in] f - * The file to write the dump to. + * The file to write the dump to. * @return zero on success non zero otherwise. */ int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..dbd77b9c66 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. */ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) (_value & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) \ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 
1 : 1ULL << log2above(n); +} + +#endif /* MLX5DR_INTERNAL_H_ */ diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..7d9e4c5025 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,7 +14,6 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', 'mlx5_flow_hw.c', @@ -72,3 +71,7 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) + +if MLX5_HAVE_IBV_FLOW_DV_SUPPORT + subdir('hws') +endif diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 741be2df98..05782a8804 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,7 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. - * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. 
- * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2002f6ef4b..cde602d3a1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,7 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX @@ -1043,6 +1044,10 @@ struct rte_flow { #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif + /* HWS flow struct. */ struct rte_flow_hw { uint32_t idx; /* Flow index from indexed pool. */ @@ -1053,9 +1058,13 @@ struct rte_flow_hw { struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ - struct mlx5dr_rule rule; /* HWS layer data struct. */ + uint8_t rule[0]; /* HWS layer data struct. */ } __rte_packed; +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif + /* rte flow action translate to DR action struct. */ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 78c741bb91..fecf28c1ca 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1107,8 +1107,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, - &rule_attr, &flow->rule); + action_template_index, rule_acts, + &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; /* Flow created fail, return the descriptor and flow memory. 
*/ @@ -1171,7 +1171,7 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; - ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr); + ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr); if (likely(!ret)) return 0; priv->hw_q[queue].job_idx++; @@ -1437,7 +1437,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, .data = &flow_attr, }; struct mlx5_indexed_pool_config cfg = { - .size = sizeof(struct rte_flow_hw), + .size = sizeof(struct rte_flow_hw) + mlx5dr_rule_get_handle_size(), .trunk_size = 1 << 12, .per_core_cache = 1 << 13, .need_lock = 1, @@ -1498,7 +1498,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
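For reference, a hedged end-to-end sketch (not taken from the patch) of the public API exported by hws/mlx5dr.h after this change: context, table, match/action templates, matcher, then an enqueued rule. All attribute values, the choice of a drop action and the example_hws_insert_rule name are assumptions; error handling and teardown are trimmed to keep the sketch short.

#include <stdlib.h>

#include "mlx5dr.h"

/* Illustrative sketch, not part of the patch. */
static int example_hws_insert_rule(struct ibv_context *ibv_ctx,
                                   const struct rte_flow_item mask[],
                                   const struct rte_flow_item value[])
{
        struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
        struct mlx5dr_table_attr tbl_attr = {
                .type = MLX5DR_TABLE_TYPE_NIC_RX,
                .level = 1,
        };
        struct mlx5dr_matcher_attr matcher_attr = {
                .priority = 0,
                .mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE,
                .table = { .sz_row_log = 10, .sz_col_log = 0 },
        };
        enum mlx5dr_action_type at_types[] = {
                MLX5DR_ACTION_TYP_DROP,
                MLX5DR_ACTION_TYP_LAST,
        };
        struct mlx5dr_rule_attr rule_attr = { .queue_id = 0, .user_data = NULL };
        struct mlx5dr_rule_action rule_actions[1];
        struct rte_flow_op_result res[1];
        struct mlx5dr_match_template *mt;
        struct mlx5dr_action_template *at;
        struct mlx5dr_context *ctx;
        struct mlx5dr_matcher *matcher;
        struct mlx5dr_action *drop;
        struct mlx5dr_table *tbl;
        struct mlx5dr_rule *rule;
        int ret;

        ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
        tbl = mlx5dr_table_create(ctx, &tbl_attr);
        mt = mlx5dr_match_template_create(mask,
                        MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
        at = mlx5dr_action_template_create(at_types);
        matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &matcher_attr);

        drop = mlx5dr_action_create_dest_drop(ctx, MLX5DR_ACTION_FLAG_HWS_RX);
        rule_actions[0].action = drop;

        /* Rule handles are opaque; their size is queried at runtime */
        rule = calloc(1, mlx5dr_rule_get_handle_size());
        ret = mlx5dr_rule_create(matcher, 0, value, 0, rule_actions,
                                 &rule_attr, rule);
        if (!ret)
                ret = mlx5dr_send_queue_poll(ctx, 0, res, 1);

        /* Teardown (reverse order of creation) omitted for brevity */
        return ret;
}

Note that rule insertion is asynchronous: mlx5dr_rule_create() only enqueues the operation on the chosen queue, and completion (or failure) is reported through mlx5dr_send_queue_poll() on that same queue.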
* [v4 00/18] net/mlx5: Add HW steering low level support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (20 preceding siblings ...) 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 01/18] net/mlx5: split flow item translation Alex Vesker ` (17 more replies) 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker 23 siblings, 18 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm; +Cc: dev, orika Mellanox ConnetX devices supports packet matching, packet modification and redirection. These functionalities are also referred to as flow-steering. To configure a steering rule, the rule is written to the device owned memory, this memory is accessed and cached by the device when processing a packet. The highlight of this patchset is supporting HW Steering (HWS) which is the new technology supported in new ConnectX devices, HWS allows configuring steering rules directly to the HW using special HW queues with minimal CPU effort. This patchset is the internal low layer implementation for HWS used by the mlx5 PMD. The mlx5dr (direct rule) is layer that bridges between the PMD and the HW by configuring the HW offloads based on the PMD logic v2: Fix check patch and cosmetic changes v3: -Fix unsupported items -Fix compilation with mlx5dv dependency v4: -Fix compile on Windows Alex Vesker (9): net/mlx5: Add additional glue functions for HWS net/mlx5/hws: Add HWS send layer net/mlx5/hws: Add HWS definer layer net/mlx5/hws: Add HWS context object net/mlx5/hws: Add HWS table object net/mlx5/hws: Add HWS matcher object net/mlx5/hws: Add HWS rule object net/mlx5/hws: Add HWS action object net/mlx5/hws: Enable HWS Bing Zhao (2): common/mlx5: query set capability of registers net/mlx5: provide the available tag registers Dariusz Sosnowski (1): net/mlx5: add port to metadata conversion Erez Shitrit (2): net/mlx5/hws: Add HWS command layer net/mlx5/hws: Add HWS pool and buddy Hamdan Igbaria (1): net/mlx5/hws: Add HWS debug layer Suanming Mou (3): net/mlx5: split flow item translation net/mlx5: split flow item matcher and value translation net/mlx5: add hardware steering item translation function doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 2 + drivers/common/mlx5/linux/mlx5_glue.c | 121 +- drivers/common/mlx5/linux/mlx5_glue.h | 17 + drivers/common/mlx5/mlx5_devx_cmds.c | 30 + drivers/common/mlx5/mlx5_devx_cmds.h | 2 + drivers/common/mlx5/mlx5_prm.h | 652 ++++- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 209 +- drivers/net/mlx5/hws/mlx5dr_action.c | 2222 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 ++ drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 ++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 +++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 + drivers/net/mlx5/hws/mlx5dr_debug.c | 462 +++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 ++++++ 
drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 +++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 + drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 + drivers/net/mlx5/linux/mlx5_os.c | 12 +- drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 9 +- drivers/net/mlx5/mlx5.h | 8 +- drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 --- drivers/net/mlx5/mlx5_flow.c | 29 +- drivers/net/mlx5/mlx5_flow.h | 174 +- drivers/net/mlx5/mlx5_flow_dv.c | 2631 +++++++++--------- drivers/net/mlx5/mlx5_flow_hw.c | 115 +- 46 files changed, 14386 insertions(+), 1726 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v4 01/18] net/mlx5: split flow item translation 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker ` (16 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> In order to share the item translation code with hardware steering mode, this commit splits flow item translation code to a dedicate function. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++--------------- 1 file changed, 979 insertions(+), 936 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 91f287af5c..70a3279e2f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13029,8 +13029,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Fill the flow with DV spec, lock free - * (mutex should be acquired by caller). + * Translate the flow item to matcher. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13040,8 +13039,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] actions - * Pointer to the list of actions. + * @param[in] matcher + * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. * @@ -13049,1041 +13048,1086 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. - */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. 
*/ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14153,7 +14197,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
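To illustrate the refactor above outside of the PMD sources, here is a minimal, self-contained C sketch of the same idea: the per-item switch is pulled out of the monolithic translate routine into an items-only helper that fills the matcher and hands the accumulated layer flags back to the caller. This is only an illustrative sketch, not mlx5 code; every demo_* name and type below is a hypothetical simplification.

#include <inttypes.h>
#include <stdio.h>

enum demo_item_type { DEMO_ITEM_END, DEMO_ITEM_ETH, DEMO_ITEM_IPV4, DEMO_ITEM_TCP };

struct demo_item { enum demo_item_type type; };

struct demo_matcher {
	int priority;      /* priority derived from the deepest matched layer */
	uint64_t mask[4];  /* stand-in for the real match-criteria buffer */
};

#define DEMO_LAYER_L2 (1ull << 0)
#define DEMO_LAYER_L3 (1ull << 1)
#define DEMO_LAYER_L4 (1ull << 2)

/* Items-only translation: mirrors the role of flow_dv_translate_items(). */
static uint64_t
demo_translate_items(const struct demo_item *items, struct demo_matcher *m)
{
	uint64_t item_flags = 0;

	for (; items->type != DEMO_ITEM_END; items++) {
		switch (items->type) {
		case DEMO_ITEM_ETH:
			m->mask[0] = ~0ull;   /* pretend: L2 header match */
			m->priority = 0;
			item_flags |= DEMO_LAYER_L2;
			break;
		case DEMO_ITEM_IPV4:
			m->mask[1] = ~0ull;   /* pretend: L3 header match */
			m->priority = 1;
			item_flags |= DEMO_LAYER_L3;
			break;
		case DEMO_ITEM_TCP:
			m->mask[2] = ~0ull;   /* pretend: L4 header match */
			m->priority = 2;
			item_flags |= DEMO_LAYER_L4;
			break;
		default:
			break;
		}
	}
	return item_flags;
}

int main(void)
{
	const struct demo_item pattern[] = {
		{ DEMO_ITEM_ETH }, { DEMO_ITEM_IPV4 }, { DEMO_ITEM_TCP },
		{ DEMO_ITEM_END },
	};
	struct demo_matcher matcher = { 0 };
	uint64_t layers = demo_translate_items(pattern, &matcher);

	/* The caller (the action-translation path) only consumes the result. */
	printf("layers=0x%" PRIx64 " priority=%d\n", layers, matcher.priority);
	return 0;
}

Keeping the item handlers behind a single entry point like this is what allows the next patch in the series to reuse them for hardware steering, where the matcher mask and the match value are produced in two separate passes rather than in one walk.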
* [v4 02/18] net/mlx5: split flow item matcher and value translation 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 14:42 ` [v4 01/18] net/mlx5: split flow item translation Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 03/18] net/mlx5: add hardware steering item translation function Alex Vesker ` (15 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2314 +++++++++++++++---------------- 2 files changed, 1185 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 0fa1735b1a..2ebb8496f2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1264,6 +1264,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. */ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 70a3279e2f..0589cafc30 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? 
(item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8323,70 +8342,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. */ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8400,145 +8410,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. 
+ */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. 
*/ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8547,14 +8561,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8564,68 +8579,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to 
the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8634,14 +8622,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8655,287 +8644,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8944,55 +8863,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9010,8 +8920,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9019,20 +8932,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9043,17 +8954,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9062,13 +8973,16 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { + void *misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); const struct rte_flow_item_gre_opt *option_m = item->mask; const struct rte_flow_item_gre_opt *option_v = item->spec; const struct rte_flow_item_gre *gre_m = gre_item->mask; @@ -9077,8 +8991,6 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; - void *misc5_v; /* * If only match key field, keep using misc for matching. @@ -9087,11 +8999,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9126,57 +9037,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + option_v = option_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9195,158 +9098,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9355,9 +9245,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9366,15 +9259,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9387,50 +9283,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9439,17 +9335,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9460,8 +9355,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9471,10 +9368,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9551,38 +9446,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9596,17 +9491,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9616,9 +9515,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9627,10 +9523,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9639,93 +9533,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] reg_type @@ -9736,12 +9615,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9749,11 +9625,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9762,40 +9636,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9804,34 +9669,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9847,7 +9749,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9856,65 +9758,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9922,17 +9825,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9941,6 +9844,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9950,7 +9855,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9958,50 +9863,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10010,31 +9915,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10042,6 +9950,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10055,20 +9970,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10078,8 +9990,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10091,21 +10001,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10113,6 +10027,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10125,119 +10047,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10246,64 +10182,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10311,21 +10234,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10335,52 +10256,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10389,29 +10298,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10419,21 +10326,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10445,16 +10353,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10463,21 +10366,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10542,7 +10443,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11328,42 +11229,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) - return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13029,7 +12936,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13039,7 +13237,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13048,287 +13246,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13338,13 +13290,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13354,48 +13315,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14124,7 +14119,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16690,27 +16685,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16742,9 +16733,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16757,10 +16745,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16769,7 +16757,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17305,7 +17293,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17325,7 +17313,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17344,7 +17332,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17364,7 +17352,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18846,8 +18834,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
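The last hunk above shows the new calling convention introduced by this split: each item translator now fills a single key buffer selected by a key_type flag, so SW steering calls it once for the mask and once for the value. A minimal sketch of that pattern, reusing the translator name and flags from the diff (the wrapper itself and its parameters are illustrative only):

/* Minimal sketch of the split mask/value translation: the same item is
 * translated twice, once per key buffer, selected by the key_type flag.
 */
static void
sketch_fill_eth_match(void *match_mask, void *match_value,
		      const struct rte_flow_item *item)
{
	/* Fill the mask side of the matcher key. */
	flow_dv_translate_item_eth(match_mask, item,
				   /* inner */ false, /* group */ 0,
				   MLX5_SET_MATCHER_SW_M);
	/* Fill the value side from the very same item. */
	flow_dv_translate_item_eth(match_value, item,
				   /* inner */ false, /* group */ 0,
				   MLX5_SET_MATCHER_SW_V);
}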
* [v4 03/18] net/mlx5: add hardware steering item translation function 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 14:42 ` [v4 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-19 14:42 ` [v4 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 04/18] net/mlx5: add port to metadata conversion Alex Vesker ` (14 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering root table flows still work under FW steering mode. This commit provides shared item tranlsation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.c | 10 +-- drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++- drivers/net/mlx5/mlx5_flow_dv.c | 134 ++++++++++++++++++++++++-------- 3 files changed, 155 insertions(+), 41 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 6fb1d53fc5..742dbd6358 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7108,7 +7108,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) struct rte_flow_item_port_id port_spec = { .id = MLX5_PORT_ESW_MGR, }; - struct mlx5_rte_flow_item_tx_queue txq_spec = { + struct mlx5_rte_flow_item_sq txq_spec = { .queue = txq, }; struct rte_flow_item pattern[] = { @@ -7118,7 +7118,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) }, { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &txq_spec, }, { @@ -7504,16 +7504,16 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, .egress = 1, .priority = 0, }; - struct mlx5_rte_flow_item_tx_queue queue_spec = { + struct mlx5_rte_flow_item_sq queue_spec = { .queue = queue, }; - struct mlx5_rte_flow_item_tx_queue queue_mask = { + struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; struct rte_flow_item items[] = { { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &queue_spec, .last = NULL, .mask = &queue_mask, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2ebb8496f2..288e09d5ba 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -28,7 +28,7 @@ enum mlx5_rte_flow_item_type { MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN, MLX5_RTE_FLOW_ITEM_TYPE_TAG, - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, MLX5_RTE_FLOW_ITEM_TYPE_VLAN, MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL, }; @@ -95,7 +95,7 @@ struct mlx5_flow_action_copy_mreg { }; /* Matches on source queue. */ -struct mlx5_rte_flow_item_tx_queue { +struct mlx5_rte_flow_item_sq { uint32_t queue; }; @@ -159,7 +159,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_LAYER_GENEVE (1u << 26) /* Queue items. */ -#define MLX5_FLOW_ITEM_TX_QUEUE (1u << 27) +#define MLX5_FLOW_ITEM_SQ (1u << 27) /* Pattern tunnel Layer bits (continued). */ #define MLX5_FLOW_LAYER_GTP (1u << 28) @@ -196,6 +196,9 @@ enum mlx5_feature_name { #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41) #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42) +/* Meter color item */ +#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44) + /* Outer Masks. 
*/ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) @@ -1006,6 +1009,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -1766,6 +1781,32 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) int flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); + +/* + * Convert rte_mtr_color to mlx5 color. + * + * @param[in] rcol + * rte_mtr_color. + * + * @return + * mlx5 color. + */ +static inline int +rte_col_2_mlx5_col(enum rte_color rcol) +{ + switch (rcol) { + case RTE_COLOR_GREEN: + return MLX5_FLOW_COLOR_GREEN; + case RTE_COLOR_YELLOW: + return MLX5_FLOW_COLOR_YELLOW; + case RTE_COLOR_RED: + return MLX5_FLOW_COLOR_RED; + default: + break; + } + return MLX5_FLOW_COLOR_UNDEFINED; +} + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, @@ -2122,4 +2163,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 0589cafc30..0cf757898d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -216,31 +216,6 @@ flow_dv_attr_init(const struct rte_flow_item *item, union flow_dv_attr *attr, attr->valid = 1; } -/* - * Convert rte_mtr_color to mlx5 color. - * - * @param[in] rcol - * rte_mtr_color. - * - * @return - * mlx5 color. - */ -static inline int -rte_col_2_mlx5_col(enum rte_color rcol) -{ - switch (rcol) { - case RTE_COLOR_GREEN: - return MLX5_FLOW_COLOR_GREEN; - case RTE_COLOR_YELLOW: - return MLX5_FLOW_COLOR_YELLOW; - case RTE_COLOR_RED: - return MLX5_FLOW_COLOR_RED; - default: - break; - } - return MLX5_FLOW_COLOR_UNDEFINED; -} - struct field_modify_info { uint32_t size; /* Size of field in protocol header, in bytes. */ uint32_t offset; /* Offset of field in protocol header, in bytes. */ @@ -7342,8 +7317,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + last_item = MLX5_FLOW_ITEM_SQ; break; case MLX5_RTE_FLOW_ITEM_TYPE_TAG: break; @@ -8223,7 +8198,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * work due to metadata regC0 mismatch. 
*/ if ((!attr->transfer && attr->egress) && priv->representor && - !(item_flags & MLX5_FLOW_ITEM_TX_QUEUE)) + !(item_flags & MLX5_FLOW_ITEM_SQ)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, @@ -11242,9 +11217,9 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, const struct rte_flow_item *item, uint32_t key_type) { - const struct mlx5_rte_flow_item_tx_queue *queue_m; - const struct mlx5_rte_flow_item_tx_queue *queue_v; - const struct mlx5_rte_flow_item_tx_queue queue_mask = { + const struct mlx5_rte_flow_item_sq *queue_m; + const struct mlx5_rte_flow_item_sq *queue_v; + const struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; void *misc_v = @@ -13184,9 +13159,9 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: flow_dv_translate_item_tx_queue(dev, key, items, key_type); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + last_item = MLX5_FLOW_ITEM_SQ; break; case RTE_FLOW_ITEM_TYPE_GTP: flow_dv_translate_item_gtp(key, items, tunnel, key_type); @@ -13226,6 +13201,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? 
*item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
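For root-table rules the helper above is typically invoked twice, once per key buffer. A rough usage sketch follows; the wrapper name and the HWS mask/value key_type flags (MLX5_SET_MATCHER_HWS_M / MLX5_SET_MATCHER_HWS_V) are assumptions, not taken from this hunk:

/* Sketch: build both halves of a root-table matcher key with the new
 * helper. Wrapper, buffer ownership and the HWS key_type flags are
 * assumed for illustration.
 */
static int
sketch_build_root_match(const struct rte_flow_item items[],
			struct mlx5_flow_attr *attr,
			void *match_mask, void *match_value,
			uint8_t *match_criteria,
			struct rte_flow_error *error)
{
	uint64_t item_flags = 0;
	int ret;

	/* First pass fills the mask and the match criteria byte. */
	ret = flow_dv_translate_items_hws(items, attr, match_mask,
					  MLX5_SET_MATCHER_HWS_M,
					  &item_flags, match_criteria, error);
	if (ret)
		return ret;
	/* Second pass fills the value; item_flags is carried over. */
	return flow_dv_translate_items_hws(items, attr, match_value,
					   MLX5_SET_MATCHER_HWS_V,
					   &item_flags, NULL, error);
}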
* [v4 04/18] net/mlx5: add port to metadata conversion 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-10-19 14:42 ` [v4 03/18] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 05/18] common/mlx5: query set capability of registers Alex Vesker ` (13 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 10 +++++- drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 97 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 60677eb8d7..98c6374547 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1541,8 +1541,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); return eth_dev; +#else + DRV_LOG(ERR, "DV support is missing for HWS."); + goto error; +#endif + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 752b60d769..1d10932619 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1944,6 +1944,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_item_port_cleanup(dev); #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); + flow_hw_clear_port_info(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 742dbd6358..9d94da0868 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 288e09d5ba..17102623c1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1323,6 +1323,58 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation. 
+ */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? port_info : NULL; +} + +#ifdef HAVE_IBV_FLOW_DV_SUPPORT +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} +#endif + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 12498794a5..fe809a83b9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2208,6 +2208,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
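A short sketch of how the conversion array above is meant to be consumed when an HWS rule needs to match on a peer port; the wrapper and its errno choice are illustrative:

/* Sketch: map an ethdev port_id to the REG_C value/mask recorded by
 * flow_hw_set_port_info().
 */
static int
sketch_port_to_regc(uint16_t port_id, uint32_t *value, uint32_t *mask)
{
	const struct flow_hw_port_info *info = flow_hw_conv_port_id(port_id);

	if (info == NULL)
		return -EINVAL; /* Unknown port or no vport metadata tag. */
	*value = info->regc_value;
	*mask = info->regc_mask;
	return 0;
}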
* [v4 05/18] common/mlx5: query set capability of registers 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-10-19 14:42 ` [v4 04/18] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 06/18] net/mlx5: provide the available tag registers Alex Vesker ` (12 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, copy to a REG_C_x. The set capability are queried and saved for the future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 45 +++++++++++++++++++++++++--- 3 files changed, 73 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 76f0b6724f..9c185366d0 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1064,6 +1064,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1163,6 +1181,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index cceaf3411d..a10aa3331b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -263,6 +263,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9c1c93f916..ca4763f53d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1295,6 +1295,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, @@ -1892,6 +1893,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1921,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1945,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2016,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2046,6 +2082,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
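The intersection stored in set_reg_c is later consulted per REG_C index, with bit i corresponding to metadata_reg_c_i. A minimal sketch of such a check; the helper itself is illustrative:

/* Sketch: test whether REG_C_<idx> supports the SET modify-header action
 * on all relevant domains (NIC RX, NIC TX and FDB when applicable),
 * using the intersected capability computed above.
 */
static inline bool
sketch_regc_is_settable(const struct mlx5_hca_attr *attr, unsigned int idx)
{
	return idx < 8 && (attr->set_reg_c & (1u << idx)) != 0;
}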
* [v4 06/18] net/mlx5: provide the available tag registers 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-10-19 14:42 ` [v4 05/18] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (11 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 121 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 98c6374547..aed55e6a62 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1545,6 +1545,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #ifdef HAVE_IBV_FLOW_DV_SUPPORT if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; #else DRV_LOG(ERR, "DV support is missing for HWS."); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1d10932619..b39ef1ecbe 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3c9e6bad53..741be2df98 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. */ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9d94da0868..dd3d2bb1a4 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 17102623c1..2002f6ef4b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1331,6 +1331,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1372,9 +1376,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) } #endif +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe809a83b9..78c741bb91 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2237,6 +2237,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
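flow_hw_init_tags_set() above leaves the usable registers in mlx5_flow_hw_avl_tags[], and flow_hw_get_reg_id() resolves a TAG index against that array. A brief usage sketch, with the wrapper and error handling being illustrative:

/* Sketch: resolve the REG_C backing a user TAG item before translation. */
static int
sketch_resolve_tag_reg(uint32_t tag_index, enum modify_reg *reg)
{
	int id = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, tag_index);

	if (id == REG_NON)
		return -ENOTSUP; /* No free REG_C behind this tag index. */
	*reg = (enum modify_reg)id;
	return 0;
}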
* [v4 07/18] net/mlx5: Add additional glue functions for HWS 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-10-19 14:42 ` [v4 06/18] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker ` (10 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 450dd6a06a..9f5953fbce 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + enum mlx5dv_flow_table_type 
ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1449,6 +1548,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1507,7 +1607,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1516,8 +1618,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1526,6 +1632,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index c4903a6dce..ef7341a76a 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
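The *_root glue entry points added above bypass mlx5dv_dr and create the objects directly on the kernel root table. A small sketch of the expected selection between the two paths; the wrapper and the is_root flag are illustrative:

/* Sketch: choose between the DR-based matcher creation and the new
 * root-table variant exposed through the glue layer above.
 */
static void *
sketch_create_matcher(struct ibv_context *ctx,
		      struct mlx5dv_flow_matcher_attr *attr,
		      void *dr_table, bool is_root)
{
	if (is_root)
		/* Root tables are owned by the kernel, no DR object exists. */
		return mlx5_glue->dv_create_flow_matcher_root(ctx, attr);
	return mlx5_glue->dv_create_flow_matcher(ctx, attr, dr_table);
}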
* [v4 08/18] net/mlx5/hws: Add HWS command layer 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-10-19 14:42 ` [v4 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (9 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 607 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++++++++ 3 files changed, 1775 insertions(+), 10 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ca4763f53d..371942ae50 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1299,6 +1309,7 @@ enum { MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1317,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1373,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1505,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1681,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1691,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1876,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2061,7 +2111,17 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 hairpin_sq_wqe_bb_size[0x5]; u8 hairpin_sq_wq_in_host_mem[0x1]; u8 hairpin_data_buffer_locked[0x1]; - u8 reserved_at_16a[0x696]; + u8 reserved_at_16a[0x36]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; }; struct mlx5_ifc_esw_cap_bits { @@ -2074,6 +2134,37 @@ struct mlx5_ifc_esw_cap_bits { u8 reserved_at_80[0x780]; }; +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 
ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_cmd_hca_cap_2_bits cmd_hca_cap_2; @@ -2085,6 +2176,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2098,6 +2190,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2978,6 +3084,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2986,6 +3093,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2993,9 +3105,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3029,6 +3146,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + 
MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 
modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3044,6 +3398,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4253,6 +4637,209 @@ struct mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 
other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destination */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8
reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..2211e49598 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,948 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, 
fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + 
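/* Modify only the miss-action fields: route misses of this table to the FDB default miss table */ +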
ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case 
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "Not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct 
mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void *attr; + + if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "Pattern length %d exceeds limit %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* Pattern_length is in ddwords */ + 
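/* Each modify-header action occupies one DDW (2 DWs), so convert the byte length accordingly */ +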
MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + 
MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[DEVX_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(create_sq_in, in, ctx); + void *wqc = DEVX_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[DEVX_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = DEVX_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[DEVX_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[DEVX_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + 
capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap. + log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 0000000000..2548b2b238 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,230 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t 
ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr 
*ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif /* MLX5DR_CMD_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v4 09/18] net/mlx5/hws: Add HWS pool and buddy 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-10-19 14:42 ` [v4 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker ` (8 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1047 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..9dba95f0b1 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,201 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + unsigned long n, unsigned long m) +{ + uint64_t out_slab = 0; + uint32_t pos = 0; /* Compilation warn */ + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = simple_calloc(buddy->max_order + 1, 
sizeof(*buddy->num_free)); + if (!buddy->num_free) { + rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int)buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int)buddy->max_order; ++i) { + rte_free(buddy->bits[i]); + } + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int)buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..b9ec446b99 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); + +#endif /* MLX5DR_BUDDY_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..2bfda5b4a5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, + int resource_idx) +{ + 
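/* Free the default resource and, for FDB pools, its mirrored counterpart */ +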
mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* Find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Failed to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have a new memory pool, there should be room for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk
*chunk) +{ + int ret = 0; + + /* Go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /* Sharing the same resource also means that all the elements are of size 1 */ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks are of size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* One memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * allocate resource and give it. + * - When free that chunk: + * the resource is freed. 
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * aloocate the first and only slot of memory/resource + * when it ended return error. 
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* Support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..cd12c3ab9a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* 
Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only a one resource in that pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* No sharing resources between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects are in the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Manged by buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* Uses for allocating chunk of big memory, each element has its own resource in the FW*/ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* One resource only, all the elements are with same one size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Many resources, the memory allocated with buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* DB */ + struct mlx5dr_pool_db db; + /* Functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void 
mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif /* MLX5DR_POOL_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
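For readers following the pool API above, a minimal usage sketch is shown below. It assumes an already created mlx5dr_context ("ctx") and reuses the STC pool type, flags and log size declared in mlx5dr_pool.h; the FDB table type and the wrapping function are assumptions made for illustration only, not code from the patch.

#include "mlx5dr_internal.h"

/* Illustrative sketch: carve one STC slot out of a single-resource,
 * fixed-size-object pool, then release it. "ctx" and the FDB table
 * type are assumptions made for this example.
 */
static int example_stc_pool_usage(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr pool_attr = {0};
	struct mlx5dr_pool_chunk chunk = {0};
	struct mlx5dr_devx_obj *devx_obj;
	struct mlx5dr_pool *pool;

	pool_attr.pool_type = MLX5DR_POOL_TYPE_STC;
	pool_attr.table_type = MLX5DR_TABLE_TYPE_FDB;	/* assumed table type */
	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL;
	pool_attr.alloc_log_sz = MLX5DR_POOL_STC_LOG_SZ;

	pool = mlx5dr_pool_create(ctx, &pool_attr);
	if (!pool)
		return rte_errno;

	/* Order 0: a single object, served from the shared bitmap element */
	chunk.order = 0;
	if (mlx5dr_pool_chunk_alloc(pool, &chunk)) {
		mlx5dr_pool_destroy(pool);
		return rte_errno;
	}

	/* The HW object is addressed by the resource base id plus chunk.offset */
	devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(pool, &chunk);
	(void)devx_obj;

	mlx5dr_pool_chunk_free(pool, &chunk);
	mlx5dr_pool_destroy(pool);
	return 0;
}

The ONE_RESOURCE plus FIXED_SIZE_OBJECTS flag combination selects the one-size element database, so every allocation here is one slot in the shared bitmap; matcher STE pools instead use RELEASE_FREE_RESOURCE plus RESOURCE_PER_CHUNK and get a dedicated resource per chunk, while STE action pools fall through to the buddy database.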
* [v4 10/18] net/mlx5/hws: Add HWS send layer 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-10-19 14:42 ` [v4 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (7 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 844 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++++++++++ 2 files changed, 1119 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..26904a9040 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,844 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned int idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue 
= queue; + /* Currently only one send ring is supported */ + ctrl.send_ring = &queue->send_ring[0]; + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned int idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = + rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on good flow as the enum + * is arrange it away creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & cq->ncqe_mask; + struct 
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = + htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries, sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, &ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + 
+close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + int err = 0; + uint32_t i; + + /* Open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + 
mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = -EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..8d4769495d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBS. + * This means a maximum of 16 such operations per rule. + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; + void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned int last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned int head_dep_idx; + unsigned int tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct 
mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for + * both 32bit and 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. + */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif /* MLX5DR_SEND_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
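As a companion to the send layer above, the sketch below shows the intended calling pattern: fill a mlx5dr_send_ste_attr, post it with mlx5dr_send_ste() and reap the completion with mlx5dr_send_queue_poll(). The RTC id and the pre-built GTA control/data segments are placeholders normally produced by the matcher and rule layers (added in later patches), so treat this as an illustrative fragment under those assumptions, not code from the patch.

#include "mlx5dr_internal.h"

/* Illustrative sketch: post one ACTIVATE STE WQE on data queue 0 and
 * busy-poll its completion. rtc_id, used_rtc_id, wqe_ctrl and wqe_data
 * are placeholders that the matcher/rule layers normally provide.
 */
static int example_post_ste_and_poll(struct mlx5dr_context *ctx,
				     uint32_t rtc_id, uint32_t *used_rtc_id,
				     struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl,
				     struct mlx5dr_wqe_gta_data_seg_ste *wqe_data)
{
	struct mlx5dr_send_engine *queue = &ctx->send_queue[0];
	struct mlx5dr_send_ste_attr ste_attr = {0};
	struct rte_flow_op_result res[1];
	int cookie, ret;

	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
	ste_attr.send_attr.notify_hw = 1;	/* ring the doorbell, request a CQE */
	ste_attr.send_attr.user_data = &cookie;	/* completion cookie reported in res[] */
	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
	ste_attr.rtc_0 = rtc_id;
	ste_attr.used_id_rtc_0 = used_rtc_id;
	ste_attr.wqe_ctrl = wqe_ctrl;
	ste_attr.wqe_data = wqe_data;	/* data segment copied as-is, wqe_tag unused */

	mlx5dr_send_ste(queue, &ste_attr);

	/* Poll the queue CQ until the single completion shows up */
	do {
		ret = mlx5dr_send_queue_poll(ctx, 0, res, 1);
	} while (ret == 0);

	return res[0].status == RTE_FLOW_OP_SUCCESS ? 0 : -1;
}

Rule insertion uses the same mechanism, except that send_attr.rule is set so the completion is only surfaced once all pending WQEs of the rule have completed, and retry_rtc_0/retry_rtc_1 let the engine repost a failed WQE to a retry RTC before marking the rule as failing.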
* [v4 11/18] net/mlx5/hws: Add HWS definer layer 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-10-19 14:42 ` [v4 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 12/18] net/mlx5/hws: Add HWS context object Alex Vesker ` (6 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++++++ 2 files changed, 2553 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..6b98eb8c96 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1968 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_NO_VLAN 0x0 +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely((bit_off) < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, byte_off, (bit_off), (mask)); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + (*((rte_be32_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 4) + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + (*((rte_be16_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, 
byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 2) + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, 
udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_pdu, v->hdr.type, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_qfi, v->hdr.qfi, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, source_qp, v->queue, mlx5_rte_flow_item_sq) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ + X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) + +/* Item set function format */ +#define X(set_type, func_name, value, item_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct item_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_eth_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_eth *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_vlan ? STE_CVLAN : STE_NO_VLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vlan *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_more_vlan ? 
STE_SVLAN : STE_CVLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->hdr.icmp_cksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (rte_be_to_cpu_16(v->hdr.icmp_ident) << __mlx5_dw_bit_off(header_icmp, ident)) | + (rte_be_to_cpu_16(v->hdr.icmp_seq_nb) << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->checksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + 
fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* Mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } 
+ + if (m->next_proto_id) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + if (m->hdr.type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + if (m->hdr.qfi) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Port ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport. 
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc, registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct mlx5_rte_flow_item_sq *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!m) + return 0; + + if (m->queue) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_source_qp_set; + DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + if (m->sequence.sequence) { + fc = 
&cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2); + } + + return 0; +} 
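The ICMP converters above only select which definer fields participate in the match; the actual value packing is done by the mlx5dr_definer_icmp_dw1_set()/mlx5dr_definer_icmp_dw2_set() callbacks earlier in this file. As a minimal standalone sketch (not part of the patch), the plain-C program below assembles the same two ICMP dwords with hard-coded shifts of 24/16/0 and 16/0, which is what __mlx5_dw_bit_off() resolves to for the header_icmp layout declared later in mlx5dr_definer.h; the big-endian conversion performed by DR_SET() is left out and the inputs are host-order values.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Pack ICMP type/code/checksum into the first match dword (type at bits
 * 31:24, code at 23:16, checksum at 15:0), mirroring
 * mlx5dr_definer_icmp_dw1_set() with host-order inputs.
 */
static uint32_t icmp_pack_dw1(uint8_t type, uint8_t code, uint16_t cksum)
{
	return ((uint32_t)type << 24) | ((uint32_t)code << 16) | cksum;
}

/* Pack identifier/sequence number into the second match dword, as in
 * mlx5dr_definer_icmp_dw2_set().
 */
static uint32_t icmp_pack_dw2(uint16_t ident, uint16_t seq_nb)
{
	return ((uint32_t)ident << 16) | seq_nb;
}

int main(void)
{
	/* ICMP echo request: type 8, code 0, arbitrary checksum/id/seq */
	printf("dw1=0x%08" PRIx32 " dw2=0x%08" PRIx32 "\n",
	       icmp_pack_dw1(8, 0, 0xf7ff), icmp_pack_dw2(1, 7));
	return 0;
}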
+ +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meter_color *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + MLX5_ASSERT(reg > 0); + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_meter_color_set; + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + ret = mlx5dr_definer_conv_item_sq(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_SQ; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METER_COLOR; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to 
skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, 
+ struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + 
/* Find the definer for given header layout */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..d52c6b0627 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipsec_bits { + u8 spi[0x20]; + u8 sequence_number[0x20]; + u8 reserved[0x10]; + u8 ipsec_syndrome[0x8]; + u8 next_header[0x8]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 unsupported_udp_misc_outer[0x20]; + u8 unsupported_udp_misc_inner[0x20]; + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 unsupported_mpls_outer[0xa0]; + u8 unsupported_mpls_inner[0xa0]; + u8 unsupported_config_headers_outer[0x80]; + u8 unsupported_config_headers_inner[0x80]; + u8 
unsupported_random_number[0x20]; + struct mlx5_ifc_definer_hl_ipsec_bits ipsec; + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 unsupported_utc_timestamp[0x40]; + u8 unsupported_free_running_timestamp[0x40]; + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + /* struct x ib_l3_extended; */ + /* struct x rwh */ + /* struct x dcceth */ + /* struct x dceth */ +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 type[0x8]; + u8 code[0x8]; + u8 cksum[0x10]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 ident[0x10]; + u8 seq_nb[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif /* MLX5DR_DEFINER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
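A rough usage sketch of the definer API declared above, based only on the prototypes in mlx5dr_definer.h and the match template fields used in mlx5dr_definer.c (mt->fc, mt->fc_sz, mt->definer). The matcher and rule layers that actually drive this API arrive in later patches of the series, so this is an assumption about intended use rather than driver code; error handling is minimal and the definer is released immediately only to keep the example short.

#include "mlx5dr_internal.h"

/* Build a match tag for one rule from a match template (mt). rule_items
 * holds the rule's spec values laid out like the template's items.
 */
static int example_build_match_tag(struct mlx5dr_context *ctx,
				   struct mlx5dr_match_template *mt,
				   const struct rte_flow_item *rule_items,
				   uint8_t *tag)
{
	int definer_id;
	int ret;

	/* Allocate (or reuse, refcounted) a definer for the template */
	ret = mlx5dr_definer_get(ctx, mt);
	if (ret)
		return ret;

	/* Definer object id, e.g. for later RTC/matcher creation */
	definer_id = mlx5dr_definer_get_id(mt->definer);
	(void)definer_id;

	/* Copy the rule spec values into the tag via the field-copy array */
	mlx5dr_definer_create_tag(rule_items, mt->fc, mt->fc_sz, tag);

	/* Released here only to keep the sketch short; normally the definer
	 * is kept for as long as the matcher that uses it exists.
	 */
	mlx5dr_definer_put(mt);

	return 0;
}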
* [v4 12/18] net/mlx5/hws: Add HWS context object 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-10-19 14:42 ` [v4 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 13/18] net/mlx5/hws: Add HWS table object Alex Vesker ` (5 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 263 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..ae86694a51 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,223 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]", i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return 
mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update) { + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* Adding rules by hash and by offset are requirements */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH) || + !IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET)) { + DR_LOG(INFO, "Required HWS RTC update mode not supported"); + return; + } + + /* Support for SELECT definer ID is required */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..b0c7802daf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, + 
MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif /* MLX5DR_CONTEXT_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
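Using the context patch above comes down to opening a context over an existing ibv_context and closing it once all child objects are gone. An illustrative sketch (not part of the patch): the attribute names follow their use in mlx5dr_context_init_hws() and mlx5dr_context_init_pd(), the queue numbers are arbitrary example values, and the public mlx5dr_context_attr definition lives in mlx5dr.h.

static struct mlx5dr_context *example_open_context(struct ibv_context *ibv_ctx)
{
        struct mlx5dr_context_attr attr = {0};
        struct mlx5dr_context *ctx;

        attr.queues = 4;        /* number of send queues to open */
        attr.queue_size = 256;  /* depth of each send queue */
        attr.pd = NULL;         /* NULL lets the context allocate a private PD */

        ctx = mlx5dr_context_open(ibv_ctx, &attr);
        if (!ctx)
                return NULL;    /* rte_errno carries the failure reason */

        /* ... create tables, matchers and rules under this context ... */

        return ctx;
}

Teardown is the mirror image: destroy the child objects first, then call mlx5dr_context_close(ctx).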
* [v4 13/18] net/mlx5/hws: Add HWS table object 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-10-19 14:42 ` [v4 12/18] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (4 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..d3f77e4780 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* Call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int +mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + 
struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* Take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + 
pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..786dddfaa4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif /* MLX5DR_TABLE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
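Tables are then created directly under that context. An illustrative sketch (not part of the patch) creating a single non-root NIC RX table: attr.type and attr.level follow their use in mlx5dr_table_create(), and a level of MLX5DR_ROOT_LEVEL (0) would instead mark the root table, which is served through the verbs/DV path rather than WQE-based insertion.

static struct mlx5dr_table *example_create_rx_table(struct mlx5dr_context *ctx)
{
        struct mlx5dr_table_attr attr = {0};

        attr.type = MLX5DR_TABLE_TYPE_NIC_RX;
        attr.level = 1; /* any level above MLX5DR_ROOT_LEVEL is a HWS table */

        return mlx5dr_table_create(ctx, &attr); /* NULL on failure */
}

The table is released with mlx5dr_table_destroy(tbl) after all of its matchers have been destroyed.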
* [v4 14/18] net/mlx5/hws: Add HWS matcher object 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-10-19 14:42 ` [v4 13/18] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker ` (3 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_matcher.c | 922 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 2 files changed, 998 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..835a3908eb --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,922 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Find location in matcher list */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + 
if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + if (next) { + /* Connect previous end FT to next RTC if exists */ + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { + /* Matcher is last, point prev end FT to default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + enum mlx5dr_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? 
"match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} + +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; 
+ struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combinabtion is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additioanl STEs required for matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + +free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void 
mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i - 1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.table_type = matcher->tbl->type; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return 
ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + 
mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + enum mlx5dv_flow_table_type ft_type; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; + break; + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.ft_type = ft_type; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..b7bf94762c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match template */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action template */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table to the main table with + * 3% of the main table rows will be enough resources for high insertion + * success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3/100) = x - 5.05 ~ 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Thrashold to determine if amount of rules require a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif /* MLX5DR_MATCHER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
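With a table in place, the matcher ties the templates together. An illustrative sketch (not part of the patch): it builds one match template from an rte_flow item array and binds it, together with an action template 'at' produced by the action layer introduced later in this series, into a matcher sized by rule count. Attribute names follow their use in mlx5dr_matcher_process_attr(); error unwinding (freeing the match template if matcher creation fails) is trimmed for brevity.

static struct mlx5dr_matcher *
example_create_matcher(struct mlx5dr_table *tbl,
                       const struct rte_flow_item items[],
                       struct mlx5dr_action_template *at)
{
        struct mlx5dr_matcher_attr attr = {0};
        struct mlx5dr_match_template *mt;

        mt = mlx5dr_match_template_create(items,
                                          MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
        if (!mt)
                return NULL;

        attr.priority = 0;
        attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
        attr.rule.num_log = 16; /* room for ~64K rules, table depth derived internally */

        return mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &attr);
}

In RESOURCE_MODE_RULE the caller only states the expected rule count; the depth conversion and, for large tables, the hidden collision matcher are handled by the code above.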
* [v4 15/18] net/mlx5/hws: Add HWS rule object 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-10-19 14:42 ` [v4 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 16/18] net/mlx5/hws: Add HWS action object Alex Vesker ` (2 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..b27318e6d4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct flow_hw_port_info *vport; + const struct rte_flow_item_ethdev *v; + + /* Flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err) { + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..96c85674f2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* 
Index in the action STE pool */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif /* MLX5DR_RULE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
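To illustrate how the rule API above is meant to be driven, below is a minimal usage sketch, not part of the patch itself: it assumes a context, matcher, flow items and rule actions prepared by the other layers of this series, and it assumes mlx5dr_send_queue_poll() is the series' completion-polling helper for the chosen send queue. Error handling is trimmed to the essentials.

#include <stdlib.h>

#include "mlx5dr.h"

/* Minimal usage sketch (assumption, not taken from the patch): enqueue one
 * rule on a send queue and busy-poll its asynchronous completion. ctx,
 * matcher, items and actions are assumed to come from the context, matcher
 * and action layers of this series.
 */
static struct mlx5dr_rule *
example_rule_insert(struct mlx5dr_context *ctx,
		    struct mlx5dr_matcher *matcher,
		    const struct rte_flow_item items[],
		    struct mlx5dr_rule_action actions[])
{
	struct mlx5dr_rule_attr attr = {
		.queue_id = 0,	/* caller-selected send queue */
		.burst = 0,	/* ring the HW immediately, no batching */
	};
	struct rte_flow_op_result res[1];
	struct mlx5dr_rule *rule;

	/* The rule handle is opaque, the caller owns its memory */
	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		return NULL;

	/* user_data must be non-NULL, it is echoed back on completion */
	attr.user_data = rule;

	/* Use match template 0 and action template 0 of the matcher */
	if (mlx5dr_rule_create(matcher, 0, items, 0, actions, &attr, rule)) {
		free(rule);
		return NULL;
	}

	/* Insertion completes asynchronously on the same queue;
	 * res[0].status and res[0].user_data report the outcome.
	 */
	while (mlx5dr_send_queue_poll(ctx, attr.queue_id, res, 1) == 0)
		;

	return rule;
}

/* Deletion is enqueued the same way, e.g.:
 *	mlx5dr_rule_destroy(rule, &attr);
 * followed by another poll on the same queue before freeing the handle.
 */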
* [v4 16/18] net/mlx5/hws: Add HWS action object 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-10-19 14:42 ` [v4 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-19 14:42 ` [v4 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2222 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + 4 files changed, 3069 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..61b3a58bf2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define WIRE_PORT 0xFFFF + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + 
BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "No such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & 
MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = MLX5DV_FLOW_ACTION_DROP; + break; + case 
MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + enum mlx5dr_table_type table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = + mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /* The FW doesn't allow to go back to wire in RX, so change it to DROP */ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* According to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed querying port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + ret = 
mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + enum mlx5dr_action_type *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static void +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, enum mlx5dv_flow_table_type *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_RX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_NIC_TX; + else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) + *ft_type = MLX5_IB_UAPI_FLOW_TABLE_TYPE_FDB; +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + + /* Convert action to FT type and verbs reformat type */ + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = EINVAL; + return rte_errno; + } + args_log_size += 
bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* The action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* Assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "Data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + enum mlx5dr_action_type action_type; + struct mlx5dr_action *action; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + 
if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)\n", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + + mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit hws (flags: %x0x, log_bulk_size: %d)\n", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize single modiy action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header\n"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + 
mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + mlx5dr_action_free_single_stc(ctx, tbl_type, 
&default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* Move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* Move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* Actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* Copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch (rule_action->action->type) { + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if action were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reseve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation. 
+ */ + setter = start_setter; + last_setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { + /* We have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single action counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + /* Count actions including the MLX5DR_ACTION_TYP_LAST terminator */ + while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST) + ; + + at->num_actions = num_actions - 1; + at->action_type_arr = simple_calloc(num_actions, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..f14d91f994 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumbo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag { 
+ ASF_SINGLE1 = 1 << 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + 
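/* [Editor's annotation, not part of the original patch]
 * How the pieces above fit together: each action STE exposes up to five STC
 * slots (CTRL, HIT, DW5, DW6, DW7). "Single" actions consume DW5, "double"
 * actions consume DW6+DW7 (hence ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3),
 * the counter rides in the CTRL slot and the jump/hit action in HIT. The
 * mlx5dr_action_setter_default_* helpers point any unused slot at the NOP
 * STCs allocated by mlx5dr_action_get_default_stc().
 */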
+static inline void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif /* MLX5DR_ACTION_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..584b7f3dfd --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + /* Return the roundup of log2(data_size) */ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* Cache and cache element handling */ +int 
mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if ((cur_num_of_actions != num_of_actions) || (cur_type != type)) + return false; + + /* All decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { + /* Compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to 
allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + +static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, 
(uint8_t *) wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* Poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* Alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW capability", + args_log_size + 
bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "Failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..8a4670427f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* Modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + /* Keep MIN updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + /* Protect pattern list */ + pthread_spinlock_t lock; + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int 
mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif /* MLX5DR_PAT_ARG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
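[Editor's note, not part of the patch above] The ARG sizing helpers at the top of mlx5dr_pat_arg.c are easy to misread: each modify-header action occupies MLX5DR_MODIFY_ACTION_SIZE (8) bytes and ARG objects are allocated in power-of-two multiples of MLX5DR_ARG_DATA_SIZE (64) bytes, so the chunk-size enum is simply the round-up of log2 of the data size in 64B units. The standalone sketch below reproduces that calculation outside the driver; the names and the cap-free loop are illustrative only and are not taken from the patch.

#include <stdint.h>
#include <stdio.h>

#define ARG_DATA_SIZE      64 /* bytes carried per ARG chunk */
#define MODIFY_ACTION_SIZE  8 /* bytes per modify-header action */

/* Round-up equivalent of mlx5dr_arg_data_size_to_arg_log_size(),
 * without the MLX5DR_ARG_CHUNK_SIZE_MAX cap.
 */
static unsigned int arg_log_size(uint16_t data_size)
{
	unsigned int log_sz = 0;

	while ((unsigned int)(ARG_DATA_SIZE << log_sz) < data_size)
		log_sz++;
	return log_sz;
}

int main(void)
{
	uint16_t num;

	for (num = 1; num <= 32; num *= 2)
		printf("%2u modify actions -> %3u bytes -> arg log size %u (%u x 64B)\n",
		       num, num * MODIFY_ACTION_SIZE,
		       arg_log_size(num * MODIFY_ACTION_SIZE),
		       1u << arg_log_size(num * MODIFY_ACTION_SIZE));
	return 0;
}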
* [v4 17/18] net/mlx5/hws: Add HWS debug layer 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-10-19 14:42 ` [v4 16/18] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 2022-10-19 14:42 ` [v4 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria From: Hamdan Igbaria <hamdani@nvidia.com> The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 490 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..890a761c48 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int +mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int +mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..cf00170f7d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0.DPDK" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif /* MLX5DR_DEBUG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
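[Editor's note, not part of the patch above] For readers who want to inspect the CSV records this patch emits, the sketch below shows one plausible way to drive the dump from application code. It assumes a context that was already opened with mlx5dr_context_open() and that mlx5dr_debug_dump() is reachable through the public mlx5dr.h header; the helper name and include path are illustrative, not taken from the patch.

#include <stdio.h>
#include <errno.h>

#include "mlx5dr.h" /* assumed include path for the HWS API */

/* Write the whole HWS state (context, caps, send engines/rings, tables,
 * matchers and their match/action templates) of an existing context into
 * a CSV file that can be post-processed offline.
 */
static int hws_dump_to_csv(struct mlx5dr_context *ctx, const char *path)
{
	FILE *f;
	int ret;

	f = fopen(path, "w");
	if (!f)
		return -errno;

	ret = mlx5dr_debug_dump(ctx, f); /* 0 on success, negative value otherwise */

	fclose(f);
	return ret;
}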
* [v4 18/18] net/mlx5/hws: Enable HWS 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-10-19 14:42 ` [v4 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-19 14:42 ` Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 14:42 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implementation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 2 + drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 209 ++++++++-- drivers/net/mlx5/hws/mlx5dr_internal.h | 93 +++++ drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 6 +- drivers/net/mlx5/mlx5.h | 7 +- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 ------------------- drivers/net/mlx5/mlx5_flow.c | 2 + drivers/net/mlx5/mlx5_flow.h | 11 +- drivers/net/mlx5/mlx5_flow_hw.c | 10 +- 14 files changed, 327 insertions(+), 432 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index bb436892a0..303eb17714 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -539,7 +539,10 @@ Limitations - WQE based high scaling and safer flow insertion/destruction. - Set ``dv_flow_en`` to 2 in order to enable HW steering. - - Async queue-based ``rte_flow_q`` APIs supported only. + - Async queue-based ``rte_flow_async`` APIs supported only. + - NIC ConnectX-5 and before are not supported. + - Partial match with item template is not supported. + - IPv6 5-tuple matching is not supported. - Match on GRE header supports the following fields: diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index a3700bbb34..eed7acc838 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -237,6 +237,10 @@ New Features sysfs entries to adjust the minimum and maximum uncore frequency values, which works on Linux with Intel hardware only. +* **Updated Nvidia mlx5 driver.** + + * Added full support for queue-based async HW steering to the PMD. + * **Rewritten pmdinfo script.** The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only. 
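[Editor's note, not part of the patch above] Since this patch switches the PMD over to the template-based mlx5dr API, a minimal sketch may help make the flow concrete before the header changes below: an action template is an MLX5DR_ACTION_TYP_LAST terminated array of action types, created once and then passed to mlx5dr_matcher_create() through the new at/num_of_at parameters, with the same ordering later reused for the rule_actions array given to mlx5dr_rule_create() together with the matching at_idx. The action combination and helper name here are illustrative only.

#include "mlx5dr.h" /* assumed include path */

/* Action template describing: modify header -> counter -> goto flow table.
 * The rule_actions array must follow the same order when the rule is
 * created with the corresponding at_idx.
 */
static struct mlx5dr_action_template *create_fwd_action_template(void)
{
	enum mlx5dr_action_type types[] = {
		MLX5DR_ACTION_TYP_MODIFY_HDR,
		MLX5DR_ACTION_TYP_CTR,
		MLX5DR_ACTION_TYP_FT,
		MLX5DR_ACTION_TYP_LAST, /* mandatory terminator */
	};

	return mlx5dr_action_template_create(types);
}

The returned template is released with mlx5dr_action_template_destroy() when it is no longer needed.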
diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index e77b46d157..3db7a1770f 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -229,6 +229,8 @@ foreach arg:has_member_args endforeach configure_file(output : 'mlx5_autoconf.h', configuration : config) +MLX5_HAVE_IBV_FLOW_DV_SUPPORT=config.get('HAVE_IBV_FLOW_DV_SUPPORT') + # Build Glue Library if dlopen_ibverbs dlopen_name = 'mlx5_glue' diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..f94798dd2d --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/hws/mlx5dr.h similarity index 66% rename from drivers/net/mlx5/mlx5_dr.h rename to drivers/net/mlx5/hws/mlx5dr.h index d0b2c15652..664dadbcde 100644 --- a/drivers/net/mlx5/mlx5_dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. + * Copyright (c) 2022 NVIDIA Corporation & Affiliates */ -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ +#ifndef MLX5DR_H_ +#define MLX5DR_H_ #include <rte_flow.h> @@ -11,6 +11,7 @@ struct mlx5dr_context; struct mlx5dr_table; struct mlx5dr_matcher; struct mlx5dr_rule; +struct ibv_context; enum mlx5dr_table_type { MLX5DR_TABLE_TYPE_NIC_RX, @@ -26,6 +27,27 @@ enum mlx5dr_matcher_resource_mode { MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, }; +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, @@ -33,7 +55,10 @@ enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. + */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, }; enum mlx5dr_action_reformat_type { @@ -43,6 +68,18 @@ enum mlx5dr_action_reformat_type { MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, }; +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + enum mlx5dr_match_template_flags { /* Allow relaxed matching by skipping derived dependent match fields. 
*/ MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, @@ -56,7 +93,7 @@ enum mlx5dr_send_queue_actions { struct mlx5dr_context_attr { uint16_t queues; uint16_t queue_size; - size_t initial_log_ste_memory; + size_t initial_log_ste_memory; /* Currently not in use */ /* Optional PD used for allocating res ources */ struct ibv_pd *pd; }; @@ -66,9 +103,21 @@ struct mlx5dr_table_attr { uint32_t level; }; +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + struct mlx5dr_matcher_attr { + /* Processing priority inside table */ uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; union { struct { uint8_t sz_row_log; @@ -84,6 +133,8 @@ struct mlx5dr_matcher_attr { struct mlx5dr_rule_attr { uint16_t queue_id; void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; uint32_t burst:1; }; @@ -92,6 +143,9 @@ struct mlx5dr_devx_obj { uint32_t id; }; +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. + */ struct mlx5dr_rule_action { struct mlx5dr_action *action; union { @@ -116,31 +170,17 @@ struct mlx5dr_rule_action { struct { rte_be32_t vlan_hdr; } push_vlan; - }; -}; - -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ }; /* Open a context used for direct rule insertion using hardware steering. @@ -153,7 +193,7 @@ struct mlx5dr_rule { * @return pointer to mlx5dr_context on success NULL otherwise. */ struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, +mlx5dr_context_open(struct ibv_context *ibv_ctx, struct mlx5dr_context_attr *attr); /* Close a context used for direct hardware steering. @@ -205,6 +245,26 @@ mlx5dr_match_template_create(const struct rte_flow_item items[], */ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. 
+ */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + /* Create a new direct rule matcher. Each matcher can contain multiple rules. * Matchers on the table will be processed by priority. Matching fields and * mask are described by the match template. In some cases multiple match @@ -216,6 +276,10 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); * Array of match templates to be used on matcher. * @param[in] num_of_mt * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. * @param[in] attr * Attributes used for matcher creation. * @return pointer to mlx5dr_matcher on success NULL otherwise. @@ -224,6 +288,8 @@ struct mlx5dr_matcher * mlx5dr_matcher_create(struct mlx5dr_table *table, struct mlx5dr_match_template *mt[], uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, struct mlx5dr_matcher_attr *attr); /* Destroy direct rule matcher. @@ -245,11 +311,13 @@ size_t mlx5dr_rule_get_handle_size(void); * @param[in] matcher * The matcher in which the new rule will be created. * @param[in] mt_idx - * Match template index to create the rule with. + * Match template index to create the match with. * @param[in] items * The items used for the value matching. * @param[in] rule_actions * Rule action to be executed on match. + * @param[in] at_idx + * Action template index to apply the actions with. * @param[in] num_of_actions * Number of rule actions. * @param[in] attr @@ -261,8 +329,8 @@ size_t mlx5dr_rule_get_handle_size(void); int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, uint8_t mt_idx, const struct rte_flow_item items[], + uint8_t at_idx, struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, struct mlx5dr_rule_attr *attr, struct mlx5dr_rule *rule_handle); @@ -317,6 +385,21 @@ mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, struct mlx5dr_table *tbl, uint32_t flags); +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + /* Create direct rule goto TIR action. * * @param[in] ctx @@ -400,10 +483,66 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, struct mlx5dr_action * mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, size_t pattern_sz, - rte_be64_t pattern[], + __be64 pattern[], uint32_t log_bulk_size, uint32_t flags); +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. 
+ * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + /* Destroy direct rule action. * * @param[in] action @@ -432,11 +571,11 @@ int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, /* Perform an action on the queue * * @param[in] ctx - * The context to which the queue belong to. + * The context to which the queue belong to. * @param[in] queue_id - * The id of the queue to perform the action on. + * The id of the queue to perform the action on. * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) + * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) * @return zero on success non zero otherwise. */ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, @@ -448,7 +587,7 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, * @param[in] ctx * The context which to dump the info from. * @param[in] f - * The file to write the dump to. + * The file to write the dump to. * @return zero on success non zero otherwise. */ int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..dbd77b9c66 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. 
*/ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) (_value & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) \ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 1 : 1ULL << log2above(n); +} + +#endif /* MLX5DR_INTERNAL_H_ */ diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..c3b8fa16d3 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,10 +14,8 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', - 'mlx5_flow_hw.c', 'mlx5_flow_aso.c', 'mlx5_flow_flex.c', 'mlx5_mac.c', @@ -42,6 +40,7 @@ sources = files( if is_linux sources += files( + 'mlx5_flow_hw.c', 'mlx5_flow_verbs.c', ) if (dpdk_conf.has('RTE_ARCH_X86_64') @@ -72,3 +71,7 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) + +if (is_linux and MLX5_HAVE_IBV_FLOW_DV_SUPPORT) + subdir('hws') +endif diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b39ef1ecbe..a34fbcf74d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1700,7 +1700,7 @@ mlx5_free_table_hash_list(struct mlx5_priv *priv) *tbls = NULL; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT /** * Allocate HW steering group hash list. * @@ -1749,7 +1749,7 @@ mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused) int err = 0; /* Tables are only used in DV and DR modes. */ -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT struct mlx5_dev_ctx_shared *sh = priv->sh; char s[MLX5_NAME_SIZE]; @@ -1942,7 +1942,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* Free the eCPRI flex parser resource. 
*/ mlx5_flex_parser_ecpri_release(dev); mlx5_flex_item_port_cleanup(dev); -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); if (priv->sh->config.dv_flow_en == 2) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 741be2df98..1d3c1ad93d 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,12 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" +#ifndef RTE_EXEC_ENV_WINDOWS +#define HAVE_MLX5_HWS_SUPPORT 1 +#else +#define __be64 uint64_t +#endif +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index fe303a73bb..137e7dd4ac 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -907,7 +907,7 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, rte_errno = errno; goto error; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT if (hrxq->hws_flags) { hrxq->action = mlx5dr_action_create_dest_tir (priv->dr_ctx, diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. 
- * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. 
- * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index dd3d2bb1a4..2c6acd551c 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -93,6 +93,8 @@ const struct mlx5_flow_driver_ops *flow_drv_ops[] = { [MLX5_FLOW_TYPE_MIN] = &mlx5_flow_null_drv_ops, #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) [MLX5_FLOW_TYPE_DV] = &mlx5_flow_dv_drv_ops, +#endif +#ifdef HAVE_MLX5_HWS_SUPPORT [MLX5_FLOW_TYPE_HW] = &mlx5_flow_hw_drv_ops, #endif [MLX5_FLOW_TYPE_VERBS] = &mlx5_flow_verbs_drv_ops, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2002f6ef4b..cde602d3a1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,7 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX @@ -1043,6 +1044,10 @@ struct rte_flow { #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif + /* HWS flow struct. */ struct rte_flow_hw { uint32_t idx; /* Flow index from indexed pool. */ @@ -1053,9 +1058,13 @@ struct rte_flow_hw { struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ - struct mlx5dr_rule rule; /* HWS layer data struct. */ + uint8_t rule[0]; /* HWS layer data struct. */ } __rte_packed; +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif + /* rte flow action translate to DR action struct. 
*/ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 78c741bb91..fecf28c1ca 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1107,8 +1107,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, - &rule_attr, &flow->rule); + action_template_index, rule_acts, + &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; /* Flow created fail, return the descriptor and flow memory. */ @@ -1171,7 +1171,7 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; - ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr); + ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr); if (likely(!ret)) return 0; priv->hw_q[queue].job_idx++; @@ -1437,7 +1437,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, .data = &flow_attr, }; struct mlx5_indexed_pool_config cfg = { - .size = sizeof(struct rte_flow_hw), + .size = sizeof(struct rte_flow_hw) + mlx5dr_rule_get_handle_size(), .trunk_size = 1 << 12, .per_core_cache = 1 << 13, .need_lock = 1, @@ -1498,7 +1498,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
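For readers following the series, the call flow of the mlx5dr API enabled by this patch can be summarized with a small sketch: open a context, create a table, one match template and one action template, build a matcher from them, create a destination TIR action, then enqueue a rule and poll its completion. This is an illustration only, not code from the patch: the function name hws_insert_one_rule, the single queue of depth 256, the table level, the HTABLE resource mode and the caller-provided ibv_context and TIR DEVX object are assumptions, <stdlib.h>, rte_flow.h and mlx5dr.h are assumed to be included, and error handling, cleanup and matcher sizing are omitted.

static int hws_insert_one_rule(struct ibv_context *ibv_ctx,
                               struct mlx5dr_devx_obj *tir_obj,
                               const struct rte_flow_item mask_items[],
                               const struct rte_flow_item value_items[])
{
        /* One send queue is enough for this sketch. */
        struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
        struct mlx5dr_context *ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
        struct mlx5dr_table_attr tbl_attr = {
                .type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1 };
        struct mlx5dr_table *tbl = mlx5dr_table_create(ctx, &tbl_attr);
        /* The mask describes what is matched, values are given per rule. */
        struct mlx5dr_match_template *mt[] = {
                mlx5dr_match_template_create(mask_items, 0) };
        enum mlx5dr_action_type at_types[] = {
                MLX5DR_ACTION_TYP_TIR, MLX5DR_ACTION_TYP_LAST };
        struct mlx5dr_action_template *at[] = {
                mlx5dr_action_template_create(at_types) };
        struct mlx5dr_matcher_attr m_attr = {
                .priority = 0,
                .mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE,
                /* Row/column sizing is left out of this sketch. */
        };
        struct mlx5dr_matcher *matcher =
                mlx5dr_matcher_create(tbl, mt, 1, at, 1, &m_attr);
        struct mlx5dr_action *tir = mlx5dr_action_create_dest_tir(ctx, tir_obj,
                                        MLX5DR_ACTION_FLAG_HWS_RX);
        /* The rule handle size is opaque, query it and allocate accordingly. */
        struct mlx5dr_rule *rule = calloc(1, mlx5dr_rule_get_handle_size());
        struct mlx5dr_rule_action rule_acts[] = { { .action = tir } };
        struct mlx5dr_rule_attr rule_attr = { .queue_id = 0, .user_data = rule };
        struct rte_flow_op_result res[1];
        int comp;

        /* Creation is only enqueued here, completion arrives on the queue. */
        if (mlx5dr_rule_create(matcher, 0, value_items, 0, rule_acts,
                               &rule_attr, rule))
                return -1;
        do {
                comp = mlx5dr_send_queue_poll(ctx, 0, res, 1);
        } while (comp == 0);
        return (comp == 1 && res[0].status == RTE_FLOW_OP_SUCCESS) ? 0 : -1;
}

The PMD side of this patch follows the same sequence: flow_hw_table_create() builds the matcher from the templates and flow_hw_async_flow_create() enqueues mlx5dr_rule_create() and later polls the queue for the result.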
* [v5 00/18] net/mlx5: Add HW steering low level support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (21 preceding siblings ...) 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 01/18] net/mlx5: split flow item translation Alex Vesker ` (17 more replies) 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker 23 siblings, 18 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm; +Cc: dev, orika Mellanox ConnectX devices support packet matching, packet modification and redirection. These functionalities are also referred to as flow-steering. To configure a steering rule, the rule is written to device-owned memory; this memory is accessed and cached by the device when processing a packet. The highlight of this patchset is support for HW Steering (HWS), the new technology available in newer ConnectX devices; HWS allows configuring steering rules directly in the HW using special HW queues with minimal CPU effort. This patchset is the internal low-level implementation of HWS used by the mlx5 PMD. The mlx5dr (direct rule) is the layer that bridges between the PMD and the HW by configuring the HW offloads based on the PMD logic. v2: Fix check patch and cosmetic changes v3: -Fix unsupported items -Fix compilation with mlx5dv dependency v4: -Fix compile on Windows v5: -Fix compile on old rdma-core or no rdma core Alex Vesker (8): net/mlx5: Add additional glue functions for HWS net/mlx5/hws: Add HWS send layer net/mlx5/hws: Add HWS definer layer net/mlx5/hws: Add HWS context object net/mlx5/hws: Add HWS table object net/mlx5/hws: Add HWS matcher object net/mlx5/hws: Add HWS rule object net/mlx5/hws: Enable HWS Bing Zhao (2): common/mlx5: query set capability of registers net/mlx5: provide the available tag registers Dariusz Sosnowski (1): net/mlx5: add port to metadata conversion Erez Shitrit (3): net/mlx5/hws: Add HWS command layer net/mlx5/hws: Add HWS pool and buddy net/mlx5/hws: Add HWS action object Hamdan Igbaria (1): net/mlx5/hws: Add HWS debug layer Suanming Mou (3): net/mlx5: split flow item translation net/mlx5: split flow item matcher and value translation net/mlx5: add hardware steering item translation function doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 5 + drivers/common/mlx5/linux/mlx5_glue.c | 121 +- drivers/common/mlx5/linux/mlx5_glue.h | 17 + drivers/common/mlx5/mlx5_devx_cmds.c | 30 + drivers/common/mlx5/mlx5_devx_cmds.h | 2 + drivers/common/mlx5/mlx5_prm.h | 652 ++++ drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 209 +- drivers/net/mlx5/hws/mlx5dr_action.c | 2237 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 ++ drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 ++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 +++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 + drivers/net/mlx5/hws/mlx5dr_debug.c | 462 +++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++ drivers/net/mlx5/hws/mlx5dr_internal.h | 93 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 919 ++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + 
drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 +++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 + drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 + drivers/net/mlx5/linux/mlx5_os.c | 12 +- drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 9 +- drivers/net/mlx5/mlx5.h | 8 +- drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 --- drivers/net/mlx5/mlx5_flow.c | 29 +- drivers/net/mlx5/mlx5_flow.h | 174 +- drivers/net/mlx5/mlx5_flow_dv.c | 2631 +++++++++--------- drivers/net/mlx5/mlx5_flow_hw.c | 115 +- 46 files changed, 14401 insertions(+), 1726 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
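To make the queue model described in this cover letter concrete, the following hedged sketch shows how a batch of rules could be enqueued on a single send queue and their completions collected afterwards. Everything here is illustrative: hws_insert_burst and its parameters are hypothetical names, the matcher, the per-rule item values, the action array and the pre-allocated rule handles are assumed to exist already, rte_common.h, rte_flow.h and mlx5dr.h are assumed to be included, and the burst bit of mlx5dr_rule_attr is assumed to postpone the doorbell until the last enqueue of the batch.

static int hws_insert_burst(struct mlx5dr_context *ctx,
                            struct mlx5dr_matcher *matcher,
                            struct mlx5dr_rule_action rule_acts[],
                            const struct rte_flow_item *values[],
                            struct mlx5dr_rule *rules[], uint32_t n)
{
        struct rte_flow_op_result res[64];
        uint32_t i, done = 0;
        int comp;

        for (i = 0; i < n; i++) {
                struct mlx5dr_rule_attr attr = {
                        .queue_id = 0,
                        .user_data = rules[i],
                        /* Assumed: ring the doorbell only on the last enqueue. */
                        .burst = (i + 1 != n),
                };
                if (mlx5dr_rule_create(matcher, 0, values[i], 0, rule_acts,
                                       &attr, rules[i]))
                        return -1;
        }
        /* Collect one completion per enqueued rule. */
        while (done < n) {
                comp = mlx5dr_send_queue_poll(ctx, 0, res, RTE_DIM(res));
                if (comp < 0)
                        return comp;
                done += comp;
        }
        return 0;
}

This mirrors, at the mlx5dr level, what the async rte_flow path in mlx5_flow_hw.c does per queue when rules are inserted and completions are pulled.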
* [v5 01/18] net/mlx5: split flow item translation 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker ` (16 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> In order to share the item translation code with hardware steering mode, this commit splits flow item translation code to a dedicate function. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++--------------- 1 file changed, 979 insertions(+), 936 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 91f287af5c..70a3279e2f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13029,8 +13029,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Fill the flow with DV spec, lock free - * (mutex should be acquired by caller). + * Translate the flow item to matcher. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13040,8 +13039,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] actions - * Pointer to the list of actions. + * @param[in] matcher + * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. * @@ -13049,1041 +13048,1086 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. - */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. 
*/ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14153,7 +14197,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
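[Editor's illustration, not part of the patch above] The diff in this message ends with flow_dv_translate() recording the action flags and then delegating all item handling to the new flow_dv_translate_items() helper. The following minimal, self-contained C sketch shows the shape of that split; every name here (translate(), translate_items(), struct matcher, the flag encoding) is a stand-in invented for the example and not the mlx5 API:

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the driver's descriptors (illustrative only). */
struct flow_item   { int type; };   /* type == 0 ends the array */
struct flow_action { int type; };   /* type == 0 ends the array */
struct matcher     { uint64_t item_flags; int priority; };

/*
 * Item (matcher) translation isolated in its own helper, so the same
 * code can later be reused by the HW steering path as well.
 */
static int
translate_items(const struct flow_item items[], struct matcher *m)
{
	for (; items->type != 0; items++)
		m->item_flags |= 1ull << items->type; /* remember matched layers */
	m->priority = m->item_flags ? 1 : 0;
	return 0;
}

/* The caller keeps only the action loop and then delegates the items. */
static int
translate(const struct flow_item items[], const struct flow_action actions[],
	  struct matcher *m, uint64_t *act_flags)
{
	for (; actions->type != 0; actions++)
		*act_flags |= 1ull << actions->type;
	return translate_items(items, m); /* mirrors flow_dv_translate_items() */
}

int
main(void)
{
	struct flow_item items[] = { {2}, {3}, {0} };
	struct flow_action acts[] = { {5}, {0} };
	struct matcher m = { 0, 0 };
	uint64_t act_flags = 0;

	translate(items, acts, &m, &act_flags);
	printf("item_flags=0x%llx act_flags=0x%llx prio=%d\n",
	       (unsigned long long)m.item_flags,
	       (unsigned long long)act_flags, m.priority);
	return 0;
}

The point of the split is only structural: the action loop and the item loop no longer share one function body, so the item side can be called on its own by the hardware steering code path.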
* [v5 02/18] net/mlx5: split flow item matcher and value translation 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 20:57 ` [v5 01/18] net/mlx5: split flow item translation Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 03/18] net/mlx5: add hardware steering item translation function Alex Vesker ` (15 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2314 +++++++++++++++---------------- 2 files changed, 1185 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 0fa1735b1a..2ebb8496f2 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1264,6 +1264,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. */ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 70a3279e2f..0589cafc30 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? 
(item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8323,70 +8342,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. */ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8400,145 +8410,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. 
+ */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. 
*/ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8547,14 +8561,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8564,68 +8579,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to 
the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8634,14 +8622,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8655,287 +8644,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
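+ * With the MLX5_SET_MATCHER_M bit set the ip_protocol field is forced
+ * to 0xff, otherwise it is set to IPPROTO_TCP.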
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8944,55 +8863,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
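+ * For the mask key (MLX5_SET_MATCHER_M) the item spec is replaced by the
+ * mask, for the HWS value key (MLX5_SET_MATCHER_HS_V) the mask is replaced
+ * by the spec, so a single MLX5_SET() sequence serves both halves.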
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9010,8 +8920,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9019,20 +8932,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9043,17 +8954,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9062,13 +8973,16 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { + void *misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); const struct rte_flow_item_gre_opt *option_m = item->mask; const struct rte_flow_item_gre_opt *option_v = item->spec; const struct rte_flow_item_gre *gre_m = gre_item->mask; @@ -9077,8 +8991,6 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; - void *misc5_v; /* * If only match key field, keep using misc for matching. @@ -9087,11 +8999,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9126,57 +9037,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + option_v = option_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9195,158 +9098,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9355,9 +9245,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9366,15 +9259,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9387,50 +9283,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9439,17 +9335,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9460,8 +9355,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9471,10 +9368,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9551,38 +9446,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9596,17 +9491,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9616,9 +9515,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9627,10 +9523,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9639,93 +9533,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] reg_type @@ -9736,12 +9615,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9749,11 +9625,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9762,40 +9636,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9804,34 +9669,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
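+ * For SW steering the mask and value are taken from the item mask/spec,
+ * for HW steering the pointer is chosen by key_type; both are then
+ * limited by the dv_mark_mask of the device.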
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9847,7 +9749,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9856,65 +9758,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9922,17 +9825,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9941,6 +9844,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9950,7 +9855,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9958,50 +9863,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10010,31 +9915,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10042,6 +9950,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10055,20 +9970,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10078,8 +9990,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10091,21 +10001,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10113,6 +10027,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10125,119 +10047,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
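+ * The ICMPv6 type and code are written into misc3 as spec & mask, with
+ * rte_flow_item_icmp6_mask passed as the default mask.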
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10246,64 +10182,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10311,21 +10234,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10335,52 +10256,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10389,29 +10298,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10419,21 +10326,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10445,16 +10353,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10463,21 +10366,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10542,7 +10443,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11328,42 +11229,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) - return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13029,7 +12936,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13039,7 +13237,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13048,287 +13246,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13338,13 +13290,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13354,48 +13315,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14124,7 +14119,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16690,27 +16685,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16742,9 +16733,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16757,10 +16745,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16769,7 +16757,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17305,7 +17293,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17325,7 +17313,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17344,7 +17332,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17364,7 +17352,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18846,8 +18834,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
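For readers following the refactor above: each flow_dv_translate_item_*() helper now fills a single key buffer, and the key_type argument decides whether that buffer holds the matcher mask or the matcher value. A minimal sketch of the convention, using a hypothetical helper for the outer UDP source port (the function name and field choice are illustrative only and not part of the series; MLX5_SET_MATCHER_M and the PRM accessors are the ones used throughout the patch, and the sketch assumes the driver's internal headers):

static void
example_translate_udp_sport(void *key, const struct rte_flow_item *item,
			    uint32_t key_type)
{
	const struct rte_flow_item_udp *spec = item->spec;
	const struct rte_flow_item_udp *mask = item->mask;
	void *hdrs = MLX5_ADDR_OF(fte_match_param, key, outer_headers);
	uint16_t fval;

	if (!mask)
		mask = &rte_flow_item_udp_mask;
	if (key_type & MLX5_SET_MATCHER_M) {
		/* Mask half of the rule: write the item mask itself. */
		fval = rte_be_to_cpu_16(mask->hdr.src_port);
	} else {
		/* Value half of the rule: spec restricted by the mask. */
		if (!spec)
			return;
		fval = rte_be_to_cpu_16(spec->hdr.src_port &
					mask->hdr.src_port);
	}
	MLX5_SET(fte_match_set_lyr_2_4, hdrs, udp_sport, fval);
}

The SW steering path simply runs such a helper twice over the same pattern, once per key, which is exactly what the new flow_dv_translate_items_sws() wrapper above does.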
* [v5 03/18] net/mlx5: add hardware steering item translation function 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 20:57 ` [v5 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-19 20:57 ` [v5 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 04/18] net/mlx5: add port to metadata conversion Alex Vesker ` (14 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering root table flows still work under FW steering mode, this commit provides shared item translation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.c | 10 +-- drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++- drivers/net/mlx5/mlx5_flow_dv.c | 134 ++++++++++++++++++++++++-------- 3 files changed, 155 insertions(+), 41 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 6fb1d53fc5..742dbd6358 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7108,7 +7108,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) struct rte_flow_item_port_id port_spec = { .id = MLX5_PORT_ESW_MGR, }; - struct mlx5_rte_flow_item_tx_queue txq_spec = { + struct mlx5_rte_flow_item_sq txq_spec = { .queue = txq, }; struct rte_flow_item pattern[] = { @@ -7118,7 +7118,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) }, { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &txq_spec, }, { @@ -7504,16 +7504,16 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, .egress = 1, .priority = 0, }; - struct mlx5_rte_flow_item_tx_queue queue_spec = { + struct mlx5_rte_flow_item_sq queue_spec = { .queue = queue, }; - struct mlx5_rte_flow_item_tx_queue queue_mask = { + struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; struct rte_flow_item items[] = { { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &queue_spec, .last = NULL, .mask = &queue_mask, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2ebb8496f2..288e09d5ba 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -28,7 +28,7 @@ enum mlx5_rte_flow_item_type { MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN, MLX5_RTE_FLOW_ITEM_TYPE_TAG, - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, MLX5_RTE_FLOW_ITEM_TYPE_VLAN, MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL, }; @@ -95,7 +95,7 @@ struct mlx5_flow_action_copy_mreg { }; /* Matches on source queue. */ -struct mlx5_rte_flow_item_tx_queue { +struct mlx5_rte_flow_item_sq { uint32_t queue; }; @@ -159,7 +159,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_LAYER_GENEVE (1u << 26) /* Queue items. */ -#define MLX5_FLOW_ITEM_TX_QUEUE (1u << 27) +#define MLX5_FLOW_ITEM_SQ (1u << 27) /* Pattern tunnel Layer bits (continued). */ #define MLX5_FLOW_LAYER_GTP (1u << 28) @@ -196,6 +196,9 @@ enum mlx5_feature_name { #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41) #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42) +/* Meter color item */ +#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44) + /* Outer Masks.
*/ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) @@ -1006,6 +1009,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -1766,6 +1781,32 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) int flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); + +/* + * Convert rte_mtr_color to mlx5 color. + * + * @param[in] rcol + * rte_mtr_color. + * + * @return + * mlx5 color. + */ +static inline int +rte_col_2_mlx5_col(enum rte_color rcol) +{ + switch (rcol) { + case RTE_COLOR_GREEN: + return MLX5_FLOW_COLOR_GREEN; + case RTE_COLOR_YELLOW: + return MLX5_FLOW_COLOR_YELLOW; + case RTE_COLOR_RED: + return MLX5_FLOW_COLOR_RED; + default: + break; + } + return MLX5_FLOW_COLOR_UNDEFINED; +} + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, @@ -2122,4 +2163,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 0589cafc30..0cf757898d 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -216,31 +216,6 @@ flow_dv_attr_init(const struct rte_flow_item *item, union flow_dv_attr *attr, attr->valid = 1; } -/* - * Convert rte_mtr_color to mlx5 color. - * - * @param[in] rcol - * rte_mtr_color. - * - * @return - * mlx5 color. - */ -static inline int -rte_col_2_mlx5_col(enum rte_color rcol) -{ - switch (rcol) { - case RTE_COLOR_GREEN: - return MLX5_FLOW_COLOR_GREEN; - case RTE_COLOR_YELLOW: - return MLX5_FLOW_COLOR_YELLOW; - case RTE_COLOR_RED: - return MLX5_FLOW_COLOR_RED; - default: - break; - } - return MLX5_FLOW_COLOR_UNDEFINED; -} - struct field_modify_info { uint32_t size; /* Size of field in protocol header, in bytes. */ uint32_t offset; /* Offset of field in protocol header, in bytes. */ @@ -7342,8 +7317,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + last_item = MLX5_FLOW_ITEM_SQ; break; case MLX5_RTE_FLOW_ITEM_TYPE_TAG: break; @@ -8223,7 +8198,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * work due to metadata regC0 mismatch. 
*/ if ((!attr->transfer && attr->egress) && priv->representor && - !(item_flags & MLX5_FLOW_ITEM_TX_QUEUE)) + !(item_flags & MLX5_FLOW_ITEM_SQ)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, @@ -11242,9 +11217,9 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, const struct rte_flow_item *item, uint32_t key_type) { - const struct mlx5_rte_flow_item_tx_queue *queue_m; - const struct mlx5_rte_flow_item_tx_queue *queue_v; - const struct mlx5_rte_flow_item_tx_queue queue_mask = { + const struct mlx5_rte_flow_item_sq *queue_m; + const struct mlx5_rte_flow_item_sq *queue_v; + const struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; void *misc_v = @@ -13184,9 +13159,9 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: flow_dv_translate_item_tx_queue(dev, key, items, key_type); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + last_item = MLX5_FLOW_ITEM_SQ; break; case RTE_FLOW_ITEM_TYPE_GTP: flow_dv_translate_item_gtp(key, items, tunnel, key_type); @@ -13226,6 +13201,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? 
*item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
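As a usage illustration (not part of this patch: the variables dev, items and error below are assumed to exist in the caller, and the real call sites only land in later patches of the series when mlx5_flow_hw.c starts creating root-table rules), a caller is expected to run the new flow_dv_translate_items_hws() twice over the same pattern, once per half of the matcher:

	struct mlx5_flow_dv_match_params mask = { .size = sizeof(mask.buf) };
	struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf) };
	struct mlx5_flow_attr flow_attr = {
		.port_id = dev->data->port_id,
		.group = 0,	/* root table rules keep going through DV/FW */
		.tbl_type = MLX5DR_TABLE_TYPE_NIC_RX,
	};
	uint64_t item_flags = 0;
	uint8_t match_criteria = 0;
	int ret;

	/* Mask half: also reports the match criteria enable bits. */
	ret = flow_dv_translate_items_hws(items, &flow_attr, mask.buf,
					  MLX5_SET_MATCHER_HS_M, &item_flags,
					  &match_criteria, error);
	if (!ret)
		/* Value half: reuses the item flags collected above. */
		ret = flow_dv_translate_items_hws(items, &flow_attr, value.buf,
						  MLX5_SET_MATCHER_HS_V,
						  &item_flags, NULL, error);

This keeps the FW/DV root-table path and the HWS non-root path sharing one item translation body while each side fills its own key buffers.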
* [v5 04/18] net/mlx5: add port to metadata conversion 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-10-19 20:57 ` [v5 03/18] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 05/18] common/mlx5: query set capability of registers Alex Vesker ` (13 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch adds an initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 10 +++++- drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 97 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 60677eb8d7..98c6374547 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1541,8 +1541,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); return eth_dev; +#else + DRV_LOG(ERR, "DV support is missing for HWS."); + goto error; +#endif + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 752b60d769..1d10932619 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1944,6 +1944,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_item_port_cleanup(dev); #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); + flow_hw_clear_port_info(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 742dbd6358..9d94da0868 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 288e09d5ba..17102623c1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1323,6 +1323,58 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation.
+ */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? port_info : NULL; +} + +#ifdef HAVE_IBV_FLOW_DV_SUPPORT +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} +#endif + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 12498794a5..fe809a83b9 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2208,6 +2208,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
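A standalone sketch of the per-port registry pattern this patch introduces: a flat array indexed by ethdev port_id that rule creation consults to translate a port into the REG_C tag/mask it should match on, filled at probe time and cleared at close. The demo_* names and DEMO_MAX_PORTS bound are hypothetical; the real driver uses mlx5_flow_hw_port_infos[], flow_hw_set_port_info() and flow_hw_conv_port_id() as shown above.

#include <stdint.h>
#include <stddef.h>

#define DEMO_MAX_PORTS 32

struct demo_port_info {
	uint32_t regc_mask;   /* which REG_C bits carry the vport tag */
	uint32_t regc_value;  /* tag value assigned to this port */
	uint32_t is_wire:1;   /* set for the uplink (wire) port */
};

static struct demo_port_info demo_port_infos[DEMO_MAX_PORTS];

/* Called once per port at probe time (mirrors flow_hw_set_port_info()). */
void
demo_set_port_info(uint16_t port_id, uint32_t mask, uint32_t value, int wire)
{
	if (port_id >= DEMO_MAX_PORTS)
		return;
	demo_port_infos[port_id].regc_mask = mask;
	demo_port_infos[port_id].regc_value = value;
	demo_port_infos[port_id].is_wire = !!wire;
}

/* Rule-creation path: NULL means the port has no usable metadata tag. */
const struct demo_port_info *
demo_conv_port_id(uint16_t port_id)
{
	if (port_id >= DEMO_MAX_PORTS)
		return NULL;
	return demo_port_infos[port_id].regc_mask ?
	       &demo_port_infos[port_id] : NULL;
}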
* [v5 05/18] common/mlx5: query set capability of registers 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-10-19 20:57 ` [v5 04/18] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 06/18] net/mlx5: provide the available tag registers Alex Vesker ` (12 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, copy to a REG_C_x. The set capability are queried and saved for the future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 45 +++++++++++++++++++++++++--- 3 files changed, 73 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 76f0b6724f..9c185366d0 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1064,6 +1064,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1163,6 +1181,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index cceaf3411d..a10aa3331b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -263,6 +263,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9c1c93f916..ca4763f53d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1295,6 +1295,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, @@ -1892,6 +1893,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1921,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1945,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2016,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2046,6 +2082,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
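An illustrative sketch of the capability-intersection logic in this patch: the 8-bit metadata_reg_c_x set capability is read for NIC receive, NIC transmit and, on an e-switch manager, the FDB, and only registers writable in every domain survive in set_reg_c. Plain C with no PRM accessors; the three input masks and the example values in main() stand in for the MLX5_GET() reads above and are not real device output.

#include <stdint.h>
#include <stdio.h>

static uint8_t
demo_set_reg_c(uint8_t rx_bits, uint8_t tx_bits,
	       int esw_manager, uint8_t esw_bits)
{
	uint8_t caps = 0xff;           /* start optimistic, as the driver does */

	caps &= rx_bits & tx_bits;     /* NIC receive and transmit domains */
	if (esw_manager)
		caps &= esw_bits;      /* FDB header-modify capability */
	return caps;
}

int main(void)
{
	/* Made-up masks: C_0..C_5 settable on RX/TX, FDB lacks C_0. */
	uint8_t caps = demo_set_reg_c(0x3f, 0x3f, 1, 0x3e);
	unsigned int i;

	for (i = 0; i < 8; i++)
		if (caps & (1u << i))
			printf("REG_C_%u can be set in all domains\n", i);
	return 0;
}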
* [v5 06/18] net/mlx5: provide the available tag registers 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-10-19 20:57 ` [v5 05/18] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (11 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 121 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 98c6374547..aed55e6a62 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1545,6 +1545,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #ifdef HAVE_IBV_FLOW_DV_SUPPORT if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; #else DRV_LOG(ERR, "DV support is missing for HWS."); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1d10932619..b39ef1ecbe 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 3c9e6bad53..741be2df98 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1200,6 +1200,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. */ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 9d94da0868..dd3d2bb1a4 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 17102623c1..2002f6ef4b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1331,6 +1331,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1372,9 +1376,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) } #endif +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index fe809a83b9..78c741bb91 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2237,6 +2237,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
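A standalone sketch of how the available-tag array is derived in flow_hw_init_tags_set() above: start from the set_reg_c capability mask, clear the registers reserved for internal use (the ASO meter color register, REG_C_6, and REG_C_0/REG_C_1 when the extended 32-bit metadata mode is enabled), then pack the remaining REG_C indexes into a compact array that the TAG item translation indexes by tag id. Names, the bit-index form of mtr_color_reg and the values in main() are illustrative, not the driver globals, and the cross-device intersection step is omitted.

#include <stdint.h>
#include <stdio.h>

#define DEMO_TAGS_MAX 8
#define DEMO_REG_NONE 0xff

static unsigned int
demo_build_avl_tags(uint8_t set_reg_c, unsigned int mtr_color_bit,
		    int xmeta32_hws, uint8_t avl_tags[DEMO_TAGS_MAX])
{
	uint8_t reserved = 0;
	unsigned int i, n = 0;

	reserved |= 1u << mtr_color_bit;            /* ASO meter color reg */
	reserved |= 1u << 6;                        /* REG_C_6, PMD internal */
	if (xmeta32_hws)
		reserved |= (1u << 0) | (1u << 1);  /* REG_C_0 / REG_C_1 */
	set_reg_c &= (uint8_t)~reserved;
	for (i = 0; i < DEMO_TAGS_MAX; i++)
		avl_tags[i] = DEMO_REG_NONE;
	for (i = 0; i < DEMO_TAGS_MAX; i++)
		if (set_reg_c & (1u << i))
			avl_tags[n++] = (uint8_t)i; /* REG_C_i usable as TAG */
	return n;                                   /* usable TAG indexes */
}

int main(void)
{
	uint8_t tags[DEMO_TAGS_MAX];
	unsigned int i, n = demo_build_avl_tags(0xff, 3, 1, tags);

	for (i = 0; i < n; i++)
		printf("TAG index %u -> REG_C_%u\n", i, (unsigned int)tags[i]);
	return 0;
}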
* [v5 07/18] net/mlx5: Add additional glue functions for HWS 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-10-19 20:57 ` [v5 06/18] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker ` (10 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index 450dd6a06a..9f5953fbce 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + enum mlx5dv_flow_table_type 
ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1449,6 +1548,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1507,7 +1607,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1516,8 +1618,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1526,6 +1632,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index c4903a6dce..ef7341a76a 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
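A minimal sketch of the glue pattern this patch extends: each verb gains a direct "root" wrapper that either calls into rdma-core when the DV headers are available or fails with ENOTSUP, and the wrapper is exported through a function-pointer table so the rest of the driver never links against rdma-core symbols directly. All demo_* names, HAVE_DEMO_DV_SUPPORT and real_dv_create_matcher() are hypothetical stand-ins for HAVE_IBV_FLOW_DV_SUPPORT, mlx5dv_create_flow_matcher() and struct mlx5_glue.

#include <errno.h>
#include <stddef.h>

struct demo_matcher_attr { int dummy; };

static void *
demo_create_matcher_root(void *ctx, struct demo_matcher_attr *attr)
{
#ifdef HAVE_DEMO_DV_SUPPORT
	/* Hypothetical stand-in for the real rdma-core call. */
	return real_dv_create_matcher(ctx, attr);
#else
	(void)ctx;
	(void)attr;
	errno = ENOTSUP;        /* DV support not compiled in */
	return NULL;
#endif
}

struct demo_glue {
	void *(*create_matcher_root)(void *ctx,
				     struct demo_matcher_attr *attr);
};

/* Consumers call demo_glue.create_matcher_root() and check for NULL. */
const struct demo_glue demo_glue = {
	.create_matcher_root = demo_create_matcher_root,
};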
* [v5 08/18] net/mlx5/hws: Add HWS command layer 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-10-19 20:57 ` [v5 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (9 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 607 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++++++++ 3 files changed, 1775 insertions(+), 10 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ca4763f53d..371942ae50 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1299,6 +1309,7 @@ enum { MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1317,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1373,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1505,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1681,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1691,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1876,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2061,7 +2111,17 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 hairpin_sq_wqe_bb_size[0x5]; u8 hairpin_sq_wq_in_host_mem[0x1]; u8 hairpin_data_buffer_locked[0x1]; - u8 reserved_at_16a[0x696]; + u8 reserved_at_16a[0x36]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; }; struct mlx5_ifc_esw_cap_bits { @@ -2074,6 +2134,37 @@ struct mlx5_ifc_esw_cap_bits { u8 reserved_at_80[0x780]; }; +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 
ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_cmd_hca_cap_2_bits cmd_hca_cap_2; @@ -2085,6 +2176,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2098,6 +2190,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2978,6 +3084,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2986,6 +3093,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2993,9 +3105,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3029,6 +3146,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + 
MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 
modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3044,6 +3398,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4253,6 +4637,209 @@ struct mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 
other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destnation */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8 
reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..da8cc3d265 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,948 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, 
fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + 
ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case 
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "Not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct 
mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void *attr; + + if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "Pattern length %d exceeds limit %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* Pattern_length is in ddwords */ + 
MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + 
MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = MLX5_ADDR_OF(create_sq_in, in, ctx); + void *wqc = MLX5_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + 
capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap. + log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 0000000000..2548b2b238 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,230 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t 
ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr 
*ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif /* MLX5DR_CMD_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
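The command layer above is a thin wrapper over DevX: each mlx5dr_cmd_*_create() helper fills a PRM mailbox with MLX5_SET(), calls mlx5_glue->devx_obj_create(), and hands back a mlx5dr_devx_obj that later layers modify and eventually destroy. The sketch below is not part of the series; it only illustrates that intended create/modify/destroy lifecycle for an STC object programmed as a DROP action. The ibv_ctx and fw_ft_type values are assumed to come from the caller, the offsets are placeholders, and error handling is trimmed.

/* Illustrative sketch only -- not part of the patch series.
 * Shows the create/modify/destroy lifecycle exposed by mlx5dr_cmd.h.
 * 'ibv_ctx' and 'fw_ft_type' are assumed to be provided by the caller.
 */
#include "mlx5dr_internal.h"

static int example_stc_drop(struct ibv_context *ibv_ctx, uint8_t fw_ft_type)
{
    struct mlx5dr_cmd_stc_create_attr stc_attr = {
        .log_obj_range = 10,    /* a range of 2^10 STC entries */
        .table_type = fw_ft_type,
    };
    struct mlx5dr_cmd_stc_modify_attr stc_mod = {
        .stc_offset = 0,        /* program the first entry of the range */
        .action_offset = 0,
        .action_type = MLX5_IFC_STC_ACTION_TYPE_DROP,
    };
    struct mlx5dr_devx_obj *stc;
    int ret;

    /* Allocate a device-owned range of STC entries */
    stc = mlx5dr_cmd_stc_create(ibv_ctx, &stc_attr);
    if (!stc)
        return rte_errno;

    /* Point one entry of the range at a DROP action */
    ret = mlx5dr_cmd_stc_modify(stc, &stc_mod);

    /* Destroy the DevX object and free the wrapper */
    mlx5dr_cmd_destroy_obj(stc);
    return ret;
}

The same pattern repeats for the RTC, STE, ARG and header-modify-pattern objects; only the attribute structure and the PRM fields written into the mailbox differ.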
* [v5 09/18] net/mlx5/hws: Add HWS pool and buddy 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-10-19 20:57 ` [v5 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker ` (8 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 201 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1047 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..9dba95f0b1 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,201 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + unsigned long n, unsigned long m) +{ + uint64_t out_slab = 0; + uint32_t pos = 0; /* Compilation warn */ + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = simple_calloc(buddy->max_order + 1, 
sizeof(*buddy->num_free)); + if (!buddy->num_free) { + rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int)buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int)buddy->max_order; ++i) { + rte_free(buddy->bits[i]); + } + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int)buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..b9ec446b99 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); + +#endif /* MLX5DR_BUDDY_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..2bfda5b4a5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, + int resource_idx) +{ + 
mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + static struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* Find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Fail to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have new memory pool, should be place for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk 
*chunk) +{ + int ret = 0; + + /* Go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /*sharing the same resource, also means that all the elements are with size 1*/ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks in size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto 
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* One memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* Go over all memory elements and find/allocate a free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as follows: + * - At start no memory is allocated at all. + * - When a new request for a chunk arrives: + * allocate a resource and hand it out. + * - When that chunk is freed: + * the resource is freed.
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general element_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* Go over all memory elements and find/allocate a free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as follows: + * - At start no memory is allocated at all. + * - When a new request for a chunk arrives: + * allocate the first and only slot of memory/resource; + * once it is exhausted, return an error.
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* Support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..cd12c3ab9a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* 
Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only one resource in the pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* Resources are not shared between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects are the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Managed by a buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* Used for allocating big chunks of memory, each element has its own resource in the FW */ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* One resource only, all the elements have the same size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Many resources, the memory is allocated with a buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* DB */ + struct mlx5dr_pool_db db; + /* Functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void
mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif /* MLX5DR_POOL_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
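[Editorial note on the pool API above] A minimal usage sketch, assuming an initialized struct mlx5dr_context and the mlx5dr_internal.h build context. It creates a one-resource, fixed-size pool (the MLX5DR_POOL_FLAGS_FOR_STC_POOL configuration, which the db_init dispatch maps to the one-size element DB), allocates a single-element chunk, resolves the chunk's backing DevX object, and tears everything down. The function name, the table type and the log-size value are illustrative only, not necessarily what the later context patch uses.

static int pool_usage_sketch(struct mlx5dr_context *ctx)
{
	struct mlx5dr_pool_attr pool_attr = {0};
	struct mlx5dr_pool_chunk chunk = {0};
	struct mlx5dr_devx_obj *devx_obj;
	struct mlx5dr_pool *pool;
	int ret;

	pool_attr.pool_type = MLX5DR_POOL_TYPE_STC;
	pool_attr.table_type = MLX5DR_TABLE_TYPE_NIC_RX; /* illustrative value */
	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL;
	pool_attr.alloc_log_sz = MLX5DR_POOL_STC_LOG_SZ;

	pool = mlx5dr_pool_create(ctx, &pool_attr);
	if (!pool)
		return rte_errno;

	/* Order 0 requests a single element from the one-size element DB */
	chunk.order = 0;
	ret = mlx5dr_pool_chunk_alloc(pool, &chunk);
	if (ret)
		goto destroy_pool;

	/* Users address the chunk as (base DevX object, offset) */
	devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(pool, &chunk);
	(void)devx_obj;

	mlx5dr_pool_chunk_free(pool, &chunk);

destroy_pool:
	mlx5dr_pool_destroy(pool);
	return ret;
}

The same alloc/free calls work unchanged for the general-size and buddy-managed DBs; only the flags passed at create time select which element manager backs the pool.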
* [v5 10/18] net/mlx5/hws: Add HWS send layer 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-10-19 20:57 ` [v5 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (7 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 844 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++++++++++ 2 files changed, 1119 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..26904a9040 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,844 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned int idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue 
= queue; + /* Currently only one send ring is supported */ + ctrl.send_ring = &queue->send_ring[0]; + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned int idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = + rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on good flow as the enum + * is arrange it away creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & cq->ncqe_mask; + struct 
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = + htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries, sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, &ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + 
+close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + int err = 0; + uint32_t i; + + /* Open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + 
mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = -EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..8d4769495d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBS. + * This means a maximum of 16 such operations per rule. + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; + void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned int last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned int head_dep_idx; + unsigned int tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct 
mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for + * both 32bit and 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. + */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif /* MLX5DR_SEND_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
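[Editorial note on the send layer above] To make the posting flow concrete: a minimal sketch of the two-WQEBB pattern, closely following what mlx5dr_send_wqe() in this patch does. The function send_wqe_sketch(), its gta_ctrl/tag/user_data parameters and the hard-coded ACTIVATE opcode are illustrative assumptions; the post_start/post_req_wqe/post_end calls and the attribute constants are the ones defined in the patch.

static void send_wqe_sketch(struct mlx5dr_send_engine *queue,
			    struct mlx5dr_wqe_gta_ctrl_seg *gta_ctrl,
			    struct mlx5dr_rule_match_tag *tag,
			    void *user_data)
{
	struct mlx5dr_send_engine_post_attr attr = {0};
	struct mlx5dr_wqe_gta_data_seg_ste *wqe_data;
	struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl;
	struct mlx5dr_send_engine_post_ctrl ctrl;
	size_t len;

	attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
	attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
	attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
	attr.notify_hw = 1; /* ring the doorbell on post_end */
	attr.user_data = user_data;

	/* Reserve the GTA control and data WQEBBs */
	ctrl = mlx5dr_send_engine_post_start(queue);
	mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &len);
	mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &len);

	/* Activate the STE and write the non-jumbo match tag */
	wqe_ctrl->op_dirix = htobe32(MLX5DR_WQE_GTA_OP_ACTIVATE << 28);
	memcpy(wqe_ctrl->stc_ix, gta_ctrl->stc_ix, sizeof(gta_ctrl->stc_ix));
	memset(wqe_data, 0, sizeof(*wqe_data));
	memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ);

	/* Fill the control segment, account the WQEBBs, ring the doorbell */
	mlx5dr_send_engine_post_end(&ctrl, &attr);
}

Completions are later drained with mlx5dr_send_queue_poll(), which walks the CQ, matches each CQE back to its wr_priv entry and reports one rte_flow_op_result per posted user_data.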
* [v5 11/18] net/mlx5/hws: Add HWS definer layer 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-10-19 20:57 ` [v5 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 12/18] net/mlx5/hws: Add HWS context object Alex Vesker ` (6 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++++++ 2 files changed, 2553 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..6b98eb8c96 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1968 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_NO_VLAN 0x0 +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely((bit_off) < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, byte_off, (bit_off), (mask)); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + (*((rte_be32_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 4) + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + (*((rte_be16_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, 
byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 2) + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, 
udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_pdu, v->hdr.type, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_qfi, v->hdr.qfi, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, source_qp, v->queue, mlx5_rte_flow_item_sq) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ + X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) + +/* Item set function format */ +#define X(set_type, func_name, value, item_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct item_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_eth_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_eth *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_vlan ? STE_CVLAN : STE_NO_VLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vlan *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_more_vlan ? 
STE_SVLAN : STE_CVLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->hdr.icmp_cksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (rte_be_to_cpu_16(v->hdr.icmp_ident) << __mlx5_dw_bit_off(header_icmp, ident)) | + (rte_be_to_cpu_16(v->hdr.icmp_seq_nb) << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->checksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + 
fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* Mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } 
+ + if (m->next_proto_id) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + if (m->hdr.type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + if (m->hdr.qfi) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Pord ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport. 
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc, registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct mlx5_rte_flow_item_sq *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!m) + return 0; + + if (m->queue) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_source_qp_set; + DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + if (m->sequence.sequence) { + fc = 
&cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2); + } + + return 0; +} 
+ +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meter_color *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + MLX5_ASSERT(reg > 0); + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_meter_color_set; + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + ret = mlx5dr_definer_conv_item_sq(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_SQ; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METER_COLOR; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to 
skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, 
+ struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + 
/* Find the definer for given header layout */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..d52c6b0627 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipsec_bits { + u8 spi[0x20]; + u8 sequence_number[0x20]; + u8 reserved[0x10]; + u8 ipsec_syndrome[0x8]; + u8 next_header[0x8]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 unsupported_udp_misc_outer[0x20]; + u8 unsupported_udp_misc_inner[0x20]; + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 unsupported_mpls_outer[0xa0]; + u8 unsupported_mpls_inner[0xa0]; + u8 unsupported_config_headers_outer[0x80]; + u8 unsupported_config_headers_inner[0x80]; + u8 
unsupported_random_number[0x20]; + struct mlx5_ifc_definer_hl_ipsec_bits ipsec; + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 unsupported_utc_timestamp[0x40]; + u8 unsupported_free_running_timestamp[0x40]; + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + /* struct x ib_l3_extended; */ + /* struct x rwh */ + /* struct x dcceth */ + /* struct x dceth */ +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 type[0x8]; + u8 code[0x8]; + u8 cksum[0x10]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 ident[0x10]; + u8 seq_nb[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif /* MLX5DR_DEFINER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
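The tag layout assumed by mlx5dr_definer_find_byte_in_tag() in the patch above can be summarized in a small standalone sketch: the match tag begins with DW_SELECTORS full dwords followed by BYTE_SELECTORS single bytes, both selector arrays are consumed from the highest index downwards, and a header-layout (HL) byte offset is mapped into that space. The helper name and the plain -1 error return below are local to this sketch and not part of the mlx5dr API; the selector arithmetic mirrors the patch.

    #include <stdint.h>

    #define DW_SIZE        4
    #define DW_SELECTORS   9
    #define BYTE_SELECTORS 8

    /* Map an HL byte offset to its byte offset inside the match tag,
     * mirroring mlx5dr_definer_find_byte_in_tag(): the tag starts with
     * DW_SELECTORS full dwords, then BYTE_SELECTORS single bytes.
     */
    static int
    hl_byte_to_tag_byte(const uint8_t dw_selector[DW_SELECTORS],
                        const uint8_t byte_selector[BYTE_SELECTORS],
                        uint32_t hl_byte_off, uint32_t *tag_byte_off)
    {
            uint32_t byte_in_dw = hl_byte_off % DW_SIZE;
            int i;

            /* Full DW selectors: tag DW i holds HL DW dw_selector[i] */
            for (i = 0; i < DW_SELECTORS; i++) {
                    if (dw_selector[i] == hl_byte_off / DW_SIZE) {
                            *tag_byte_off = byte_in_dw +
                                            DW_SIZE * (DW_SELECTORS - i - 1);
                            return 0;
                    }
            }

            /* Byte selectors sit after the DW area of the tag */
            for (i = BYTE_SELECTORS - 1; i >= 0; i--) {
                    if (byte_selector[i] == hl_byte_off) {
                            *tag_byte_off = DW_SIZE * DW_SELECTORS +
                                            (BYTE_SELECTORS - i - 1);
                            return 0;
                    }
            }

            return -1; /* HL byte not covered by this definer */
    }

Once mlx5dr_definer_find_best_hl_fit() has chosen the selectors, this translation is exactly what mlx5dr_definer_fc_bind() applies to every field-copy entry, so that mlx5dr_definer_create_tag() later writes each matched field straight into its compressed position in the tag.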
* [v5 12/18] net/mlx5/hws: Add HWS context object 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-10-19 20:57 ` [v5 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 13/18] net/mlx5/hws: Add HWS table object Alex Vesker ` (5 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 263 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..ae86694a51 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,223 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]", i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return 
mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update) { + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* Adding rules by hash and by offset are requirements */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH) || + !IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET)) { + DR_LOG(INFO, "Required HWS RTC update mode not supported"); + return; + } + + /* Support for SELECT definer ID is required */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..b0c7802daf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT = 1 << 0, + 
MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif /* MLX5DR_CONTEXT_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
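A minimal usage sketch of the context API introduced in this patch, assuming the caller already holds an ibv_context for the device. The attribute field names (pd, queues, queue_size) follow the dereferences in mlx5dr_context_init_hws() and the mlx5dr_send_queues_open() call above; the queue sizing values are illustrative only, and the full mlx5dr_context_attr definition lives in mlx5dr.h, which is not part of this hunk.

    #include <stddef.h>
    #include "mlx5dr.h"

    /* Open an HWS context with a private PD and 16 queues of 256 entries.
     * Values are illustrative; mlx5dr_context_attr is defined in mlx5dr.h.
     */
    static struct mlx5dr_context *
    example_context_open(struct ibv_context *ibv_ctx)
    {
            struct mlx5dr_context_attr attr = {0};

            attr.pd = NULL;        /* NULL: mlx5dr allocates a private PD */
            attr.queues = 16;      /* number of send queues */
            attr.queue_size = 256; /* depth of each send queue */

            /* Returns NULL on failure, with rte_errno set by the failing step */
            return mlx5dr_context_open(ibv_ctx, &attr);
    }

The matching teardown is mlx5dr_context_close(), which in this patch closes the send queues, the STC pools and the private PD in reverse order of creation.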
* [v5 13/18] net/mlx5/hws: Add HWS table object 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-10-19 20:57 ` [v5 12/18] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (4 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..d3f77e4780 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* Call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int +mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + 
struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* Take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + 
pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..786dddfaa4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif /* MLX5DR_TABLE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
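[Editor's note] The table patch above exposes mlx5dr_table_create()/mlx5dr_table_destroy() on top of a default flow table and, for FDB, a shared default-miss table. A minimal usage sketch, assuming `ctx` is an already-opened mlx5dr context from the context patch earlier in the series; everything else is taken from this patch:

    /* Sketch only: create a non-root FDB table under an existing context. */
    struct mlx5dr_table_attr tbl_attr = {
            .type = MLX5DR_TABLE_TYPE_FDB,
            .level = 1,     /* level 0 is the root table; a higher level goes through HWS */
    };
    struct mlx5dr_table *tbl;

    tbl = mlx5dr_table_create(ctx, &tbl_attr);
    if (!tbl)
            return -rte_errno;      /* rte_errno is set by the failing layer */

    /* ... create matchers and rules under the table ... */

    mlx5dr_table_destroy(tbl);

Note that for FDB tables the create path also takes a reference on the context-wide default miss table, so teardown order (rules, then matchers, then the table) matters.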
* [v5 14/18] net/mlx5/hws: Add HWS matcher object 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-10-19 20:57 ` [v5 13/18] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker ` (3 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/meson.build | 2 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 919 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 3 files changed, 997 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index e77b46d157..e8b9a07db5 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -74,6 +74,8 @@ has_member_args = [ 'struct ibv_counters_init_attr', 'comp_mask' ], [ 'HAVE_MLX5DV_DEVX_UAR_OFFSET', 'infiniband/mlx5dv.h', 'struct mlx5dv_devx_uar', 'mmap_off' ], + [ 'HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE', 'infiniband/mlx5dv.h', + 'struct mlx5dv_flow_matcher_attr', 'ft_type' ], ] # input array for meson symbol search: # [ "MACRO to define if found", "header for the search", diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..d1205c42fa --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,919 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct 
mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Find location in matcher list */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + if (next) { + /* Connect previous end FT to next RTC if exists */ + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { + /* Matcher is last, point prev end FT to default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + enum mlx5dr_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? 
"match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} + +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; 
+ struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combinabtion is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additioanl STEs required for matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + +free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void 
mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i - 1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.table_type = matcher->tbl->type; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return 
ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + 
mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + break; +#ifdef HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE + case MLX5DR_TABLE_TYPE_FDB: + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + attr.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + break; +#endif + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..b7bf94762c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match template */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action template */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table to the main table with + * 3% of the main table rows will be enough resources for high insertion + * success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3/100) = x - 5.05 ~ 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Thrashold to determine if amount of rules require a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif /* MLX5DR_MATCHER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
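[Editor's note] The matcher patch wires match and action templates into RTC objects and chains matchers by priority inside a table. A usage sketch, assuming `tbl` is a non-root table from the previous patch, `items` is a caller-built rte_flow_item pattern, and `at` is an action template created through the action-template API added later in this series (that creation call is not shown here):

    struct mlx5dr_matcher_attr attr = {0};
    struct mlx5dr_match_template *mt;
    struct mlx5dr_matcher *matcher;

    mt = mlx5dr_match_template_create(items, MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH);
    if (!mt)
            return -rte_errno;

    attr.priority = 0;
    attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
    attr.rule.num_log = 16; /* above the 2^10 threshold, so an assured collision matcher is created too */

    matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &attr);
    if (!matcher) {
            mlx5dr_match_template_destroy(mt);
            return -rte_errno;
    }

In MLX5DR_MATCHER_RESOURCE_MODE_RULE mode the driver derives the table depth (sz_col_log) from rule.num_log via mlx5dr_matcher_rules_to_tbl_depth(), which is why the sketch sets the rule count rather than the raw table geometry.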
* [v5 15/18] net/mlx5/hws: Add HWS rule object 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-10-19 20:57 ` [v5 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 16/18] net/mlx5/hws: Add HWS action object Alex Vesker ` (2 subsequent siblings) 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..b27318e6d4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct flow_hw_port_info *vport; + const struct rte_flow_item_ethdev *v; + + /* Flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err) { + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..96c85674f2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* 
Action STE pool ID */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif /* MLX5DR_RULE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
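[Editor's note] The rule patch makes insertion asynchronous: mlx5dr_rule_create() only posts WQEs on the chosen queue and the final status arrives later as a completion. A sketch of a single queued insertion, assuming `matcher`, `items`, `rule_actions` and an opaque `my_cookie` pointer are provided by the caller; the polling call that reaps the completion belongs to the send-layer patch and is only referenced in a comment:

    struct mlx5dr_rule_attr rule_attr = {
            .queue_id = 0,
            .user_data = my_cookie, /* must be non-NULL, echoed back in the completion */
            .burst = 0,             /* ring the doorbell now instead of batching */
    };
    struct mlx5dr_rule *rule;
    int ret;

    rule = calloc(1, mlx5dr_rule_get_handle_size());    /* handle memory is caller-owned */
    if (!rule)
            return -ENOMEM;

    ret = mlx5dr_rule_create(matcher, 0 /* mt_idx */, items,
                             0 /* at_idx */, rule_actions, &rule_attr, rule);
    if (ret)
            return ret;     /* already negative, e.g. -EBUSY when the queue is full */

    /* The rule is now MLX5DR_RULE_STATUS_CREATING; poll the send queue
     * (send-layer API) until the completion reports CREATED or FAILED.
     */

mlx5dr_rule_destroy() follows the same queued model and is only legal once the creation completion has been observed; the patch returns EBUSY for a rule that is still in MLX5DR_RULE_STATUS_CREATING.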
* [v5 16/18] net/mlx5/hws: Add HWS action object 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-10-19 20:57 ` [v5 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-19 20:57 ` [v5 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2237 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + 4 files changed, 3084 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..ea43383a33 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2237 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define WIRE_PORT 0xFFFF + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + 
BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "No such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + 
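+	/* Take a reference on the shared STC for each table type enabled by
+	 * the action flags (RX, TX, FDB). If a later allocation fails, the
+	 * references already taken are released in the error path below.
+	 */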
if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = 
MLX5DV_FLOW_ACTION_DROP; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + enum mlx5dr_table_type table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = + mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /* The FW doesn't allow to go back to wire in RX, so change it to DROP */ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* According to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
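+		/* A single modify-header action is executed inline from the STC:
+		 * for SET/ADD the data dword is zeroed here and filled per rule
+		 * when the WQE is built. The multi-action case below uses the
+		 * pattern and argument objects instead.
+		 */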
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed querying port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + ret = 
mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + enum mlx5dr_action_type *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static int +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, enum mlx5dv_flow_table_type *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_TX; +#ifdef HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; +#endif + } else { + rte_errno = ENOTSUP; + return 1; + } + + return 0; +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + int ret; + + /* Convert action to FT type and verbs reformat type */ + ret = mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + if (ret) + return rte_errno; + + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if 
(args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = EINVAL; + return rte_errno; + } + args_log_size += bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* The action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* Assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "Data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + enum mlx5dr_action_type action_type; + struct mlx5dr_action *action; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + 
if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)\n", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + int ret; + + ret = mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + if (ret) + return rte_errno; + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit hws (flags: %x0x, log_bulk_size: %d)\n", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize single modiy action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header\n"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case 
MLX5DR_ACTION_TYP_POP_VLAN: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + 
mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* Move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* Move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* Actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* Copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch (rule_action->action->type) { + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
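+	/* Counters are set through the control DW: write the counter offset
+	 * into DW0 and point the control STC index at the counter action STC.
+	 */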
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if action were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reseve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation. 
+ */ + setter = start_setter; + last_setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { + /* We have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single action counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + while (action_type[num_actions] != MLX5DR_ACTION_TYP_LAST) + num_actions++; + + at->num_actions = num_actions; + at->action_type_arr = simple_calloc(num_actions + 1, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions + 1; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..f14d91f994 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumbo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag {
+ ASF_SINGLE1 = 1 << 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + 
+static inline void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif /* MLX5DR_ACTION_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..584b7f3dfd --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + /* Return the roundup of log2(data_size) */ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* Cache and cache element handling */ +int 
mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if ((cur_num_of_actions != num_of_actions) || (cur_type != type)) + return false; + + /* All decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { + /* Compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to 
allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + +static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, 
(uint8_t *) wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* Poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* Alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW capability", + args_log_size + 
bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "Failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..8a4670427f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* Modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + /* Keep MIN updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + /* Protect pattern list */ + pthread_spinlock_t lock; + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int 
mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif /* MLX5DR_PAT_ARG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
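The action template flow above is easier to follow with a short usage sketch. The fragment below is not part of the patch; it is a minimal sketch, assuming the internal headers are available to the caller, of how an MLX5DR_ACTION_TYP_LAST terminated array is turned into a template and what mlx5dr_action_template_process() computes for it. The helper name example_build_template() and the printf are illustrative only.

#include <stdio.h>

#include "mlx5dr_internal.h"

/* Minimal sketch (not part of the patch): build an action template for
 * "modify header + counter + goto table" and let the setter logic above
 * decide how many action STEs are required.
 */
static int example_build_template(void)
{
	enum mlx5dr_action_type types[] = {
		MLX5DR_ACTION_TYP_MODIFY_HDR,	/* double setter (DW6/DW7) */
		MLX5DR_ACTION_TYP_CTR,		/* control setter (DW0) */
		MLX5DR_ACTION_TYP_FT,		/* hit setter */
		MLX5DR_ACTION_TYP_LAST,		/* terminator expected by the API */
	};
	struct mlx5dr_action_template *at;
	int ret;

	at = mlx5dr_action_template_create(types);
	if (!at)
		return -rte_errno;

	/* Normally driven by the matcher code, called here only to
	 * inspect the result.
	 */
	ret = mlx5dr_action_template_process(at);
	if (ret) {
		mlx5dr_action_template_destroy(at);
		return ret;
	}

	/* For this combination a single action STE is expected: CTR lands
	 * in the control DW, MODIFY_HDR in the double DWs 6/7 and FT in the
	 * hit part of the same STE, so only_term stays 0.
	 */
	printf("action STEs: %d, only_term: %d\n",
	       at->num_of_action_stes, at->only_term);

	mlx5dr_action_template_destroy(at);
	return 0;
}

A template built this way is later passed to mlx5dr_matcher_create() together with the match templates, as the mlx5dr.h update later in the thread shows.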
* [v5 17/18] net/mlx5/hws: Add HWS debug layer 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-10-19 20:57 ` [v5 16/18] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 2022-10-19 20:57 ` [v5 18/18] net/mlx5/hws: Enable HWS Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria From: Hamdan Igbaria <hamdani@nvidia.com> The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 490 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..890a761c48 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int +mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int +mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..cf00170f7d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0.DPDK" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif /* MLX5DR_DEBUG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
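A short usage sketch may help here as well. It is not part of the patch; it is a minimal sketch of how an application could route the CSV dump into a file through the mlx5dr_debug_dump() entry point added above (the helper name example_dump() and the file handling around it are illustrative). Each emitted line starts with one of the MLX5DR_DEBUG_RES_TYPE_* codes from mlx5dr_debug.h, followed by the object pointer and its attributes.

#include <errno.h>
#include <stdio.h>

#include "mlx5dr.h"

/* Minimal sketch (not part of the patch): write the debug CSV of a context
 * into a file. mlx5dr_debug_dump() walks the send engines, tables and
 * matchers of the context under its control lock.
 */
static int example_dump(struct mlx5dr_context *ctx, const char *path)
{
	FILE *f;
	int ret;

	f = fopen(path, "w");
	if (!f)
		return -errno;

	ret = mlx5dr_debug_dump(ctx, f);

	fclose(f);
	return ret;
}

Keeping the resource type codes stable (4000 for context, 4100 for table, 4200 for matcher and so on) makes the file easy to split per object type in post-processing.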
* [v5 18/18] net/mlx5/hws: Enable HWS 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-10-19 20:57 ` [v5 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-19 20:57 ` Alex Vesker 17 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-19 20:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implenation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 3 + drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 209 ++++++++-- drivers/net/mlx5/hws/mlx5dr_internal.h | 93 +++++ drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 6 +- drivers/net/mlx5/mlx5.h | 7 +- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 ------------------- drivers/net/mlx5/mlx5_flow.c | 2 + drivers/net/mlx5/mlx5_flow.h | 11 +- drivers/net/mlx5/mlx5_flow_hw.c | 10 +- 14 files changed, 328 insertions(+), 432 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (66%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index bb436892a0..303eb17714 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -539,7 +539,10 @@ Limitations - WQE based high scaling and safer flow insertion/destruction. - Set ``dv_flow_en`` to 2 in order to enable HW steering. - - Async queue-based ``rte_flow_q`` APIs supported only. + - Async queue-based ``rte_flow_async`` APIs supported only. + - NIC ConnectX-5 and before are not supported. + - Partial match with item template is not supported. + - IPv6 5-tuple matching is not supported. - Match on GRE header supports the following fields: diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index a3700bbb34..eed7acc838 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -237,6 +237,10 @@ New Features sysfs entries to adjust the minimum and maximum uncore frequency values, which works on Linux with Intel hardware only. +* **Updated Nvidia mlx5 driver.** + + * Added fully support for queue based async HW steering to the PMD. + * **Rewritten pmdinfo script.** The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only. diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index e8b9a07db5..2c69c5e546 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -8,6 +8,7 @@ dlopen_ibverbs = (get_option('ibverbs_link') == 'dlopen') LIB_GLUE_BASE = 'librte_common_mlx5_glue.so' LIB_GLUE_VERSION = abi_version LIB_GLUE = LIB_GLUE_BASE + '.' 
+ LIB_GLUE_VERSION +MLX5_HAVE_IBV_FLOW_DV_SUPPORT = false if dlopen_ibverbs dpdk_conf.set('RTE_IBVERBS_LINK_DLOPEN', 1) cflags += [ @@ -231,6 +232,8 @@ foreach arg:has_member_args endforeach configure_file(output : 'mlx5_autoconf.h', configuration : config) +MLX5_HAVE_IBV_FLOW_DV_SUPPORT=config.get('HAVE_IBV_FLOW_DV_SUPPORT') + # Build Glue Library if dlopen_ibverbs dlopen_name = 'mlx5_glue' diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..f94798dd2d --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/hws/mlx5dr.h similarity index 66% rename from drivers/net/mlx5/mlx5_dr.h rename to drivers/net/mlx5/hws/mlx5dr.h index d0b2c15652..664dadbcde 100644 --- a/drivers/net/mlx5/mlx5_dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. + * Copyright (c) 2022 NVIDIA Corporation & Affiliates */ -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ +#ifndef MLX5DR_H_ +#define MLX5DR_H_ #include <rte_flow.h> @@ -11,6 +11,7 @@ struct mlx5dr_context; struct mlx5dr_table; struct mlx5dr_matcher; struct mlx5dr_rule; +struct ibv_context; enum mlx5dr_table_type { MLX5DR_TABLE_TYPE_NIC_RX, @@ -26,6 +27,27 @@ enum mlx5dr_matcher_resource_mode { MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, }; +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, @@ -33,7 +55,10 @@ enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. + */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, }; enum mlx5dr_action_reformat_type { @@ -43,6 +68,18 @@ enum mlx5dr_action_reformat_type { MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, }; +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + enum mlx5dr_match_template_flags { /* Allow relaxed matching by skipping derived dependent match fields. 
*/ MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, @@ -56,7 +93,7 @@ enum mlx5dr_send_queue_actions { struct mlx5dr_context_attr { uint16_t queues; uint16_t queue_size; - size_t initial_log_ste_memory; + size_t initial_log_ste_memory; /* Currently not in use */ /* Optional PD used for allocating res ources */ struct ibv_pd *pd; }; @@ -66,9 +103,21 @@ struct mlx5dr_table_attr { uint32_t level; }; +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + struct mlx5dr_matcher_attr { + /* Processing priority inside table */ uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; union { struct { uint8_t sz_row_log; @@ -84,6 +133,8 @@ struct mlx5dr_matcher_attr { struct mlx5dr_rule_attr { uint16_t queue_id; void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; uint32_t burst:1; }; @@ -92,6 +143,9 @@ struct mlx5dr_devx_obj { uint32_t id; }; +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. + */ struct mlx5dr_rule_action { struct mlx5dr_action *action; union { @@ -116,31 +170,17 @@ struct mlx5dr_rule_action { struct { rte_be32_t vlan_hdr; } push_vlan; - }; -}; - -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ }; /* Open a context used for direct rule insertion using hardware steering. @@ -153,7 +193,7 @@ struct mlx5dr_rule { * @return pointer to mlx5dr_context on success NULL otherwise. */ struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, +mlx5dr_context_open(struct ibv_context *ibv_ctx, struct mlx5dr_context_attr *attr); /* Close a context used for direct hardware steering. @@ -205,6 +245,26 @@ mlx5dr_match_template_create(const struct rte_flow_item items[], */ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. 
+ */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + /* Create a new direct rule matcher. Each matcher can contain multiple rules. * Matchers on the table will be processed by priority. Matching fields and * mask are described by the match template. In some cases multiple match @@ -216,6 +276,10 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); * Array of match templates to be used on matcher. * @param[in] num_of_mt * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. * @param[in] attr * Attributes used for matcher creation. * @return pointer to mlx5dr_matcher on success NULL otherwise. @@ -224,6 +288,8 @@ struct mlx5dr_matcher * mlx5dr_matcher_create(struct mlx5dr_table *table, struct mlx5dr_match_template *mt[], uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, struct mlx5dr_matcher_attr *attr); /* Destroy direct rule matcher. @@ -245,11 +311,13 @@ size_t mlx5dr_rule_get_handle_size(void); * @param[in] matcher * The matcher in which the new rule will be created. * @param[in] mt_idx - * Match template index to create the rule with. + * Match template index to create the match with. * @param[in] items * The items used for the value matching. * @param[in] rule_actions * Rule action to be executed on match. + * @param[in] at_idx + * Action template index to apply the actions with. * @param[in] num_of_actions * Number of rule actions. * @param[in] attr @@ -261,8 +329,8 @@ size_t mlx5dr_rule_get_handle_size(void); int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, uint8_t mt_idx, const struct rte_flow_item items[], + uint8_t at_idx, struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, struct mlx5dr_rule_attr *attr, struct mlx5dr_rule *rule_handle); @@ -317,6 +385,21 @@ mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, struct mlx5dr_table *tbl, uint32_t flags); +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + /* Create direct rule goto TIR action. * * @param[in] ctx @@ -400,10 +483,66 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, struct mlx5dr_action * mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, size_t pattern_sz, - rte_be64_t pattern[], + __be64 pattern[], uint32_t log_bulk_size, uint32_t flags); +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. 
+ * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + /* Destroy direct rule action. * * @param[in] action @@ -432,11 +571,11 @@ int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, /* Perform an action on the queue * * @param[in] ctx - * The context to which the queue belong to. + * The context to which the queue belong to. * @param[in] queue_id - * The id of the queue to perform the action on. + * The id of the queue to perform the action on. * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) + * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) * @return zero on success non zero otherwise. */ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, @@ -448,7 +587,7 @@ int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, * @param[in] ctx * The context which to dump the info from. * @param[in] f - * The file to write the dump to. + * The file to write the dump to. * @return zero on success non zero otherwise. */ int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f); diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..dbd77b9c66 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. 
*/ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) (_value & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) \ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 1 : 1ULL << log2above(n); +} + +#endif /* MLX5DR_INTERNAL_H_ */ diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..c3b8fa16d3 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,10 +14,8 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', - 'mlx5_flow_hw.c', 'mlx5_flow_aso.c', 'mlx5_flow_flex.c', 'mlx5_mac.c', @@ -42,6 +40,7 @@ sources = files( if is_linux sources += files( + 'mlx5_flow_hw.c', 'mlx5_flow_verbs.c', ) if (dpdk_conf.has('RTE_ARCH_X86_64') @@ -72,3 +71,7 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) + +if (is_linux and MLX5_HAVE_IBV_FLOW_DV_SUPPORT) + subdir('hws') +endif diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b39ef1ecbe..a34fbcf74d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1700,7 +1700,7 @@ mlx5_free_table_hash_list(struct mlx5_priv *priv) *tbls = NULL; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT /** * Allocate HW steering group hash list. * @@ -1749,7 +1749,7 @@ mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused) int err = 0; /* Tables are only used in DV and DR modes. */ -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT struct mlx5_dev_ctx_shared *sh = priv->sh; char s[MLX5_NAME_SIZE]; @@ -1942,7 +1942,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* Free the eCPRI flex parser resource. 
*/ mlx5_flex_parser_ecpri_release(dev); mlx5_flex_item_port_cleanup(dev); -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); if (priv->sh->config.dv_flow_en == 2) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 741be2df98..1d3c1ad93d 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,12 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" +#ifndef RTE_EXEC_ENV_WINDOWS +#define HAVE_MLX5_HWS_SUPPORT 1 +#else +#define __be64 uint64_t +#endif +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index fe303a73bb..137e7dd4ac 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -907,7 +907,7 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, rte_errno = errno; goto error; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT if (hrxq->hws_flags) { hrxq->action = mlx5dr_action_create_dest_tir (priv->dr_ctx, diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. 
- * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. 
- * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index dd3d2bb1a4..2c6acd551c 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -93,6 +93,8 @@ const struct mlx5_flow_driver_ops *flow_drv_ops[] = { [MLX5_FLOW_TYPE_MIN] = &mlx5_flow_null_drv_ops, #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) [MLX5_FLOW_TYPE_DV] = &mlx5_flow_dv_drv_ops, +#endif +#ifdef HAVE_MLX5_HWS_SUPPORT [MLX5_FLOW_TYPE_HW] = &mlx5_flow_hw_drv_ops, #endif [MLX5_FLOW_TYPE_VERBS] = &mlx5_flow_verbs_drv_ops, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 2002f6ef4b..cde602d3a1 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,7 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX @@ -1043,6 +1044,10 @@ struct rte_flow { #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif + /* HWS flow struct. */ struct rte_flow_hw { uint32_t idx; /* Flow index from indexed pool. */ @@ -1053,9 +1058,13 @@ struct rte_flow_hw { struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ - struct mlx5dr_rule rule; /* HWS layer data struct. */ + uint8_t rule[0]; /* HWS layer data struct. */ } __rte_packed; +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif + /* rte flow action translate to DR action struct. 
*/ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 78c741bb91..fecf28c1ca 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1107,8 +1107,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, - &rule_attr, &flow->rule); + action_template_index, rule_acts, + &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; /* Flow created fail, return the descriptor and flow memory. */ @@ -1171,7 +1171,7 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; - ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr); + ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr); if (likely(!ret)) return 0; priv->hw_q[queue].job_idx++; @@ -1437,7 +1437,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, .data = &flow_attr, }; struct mlx5_indexed_pool_config cfg = { - .size = sizeof(struct rte_flow_hw), + .size = sizeof(struct rte_flow_hw) + mlx5dr_rule_get_handle_size(), .trunk_size = 1 << 12, .per_core_cache = 1 << 13, .need_lock = 1, @@ -1498,7 +1498,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
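[Editorial note on the mlx5_flow.h and mlx5_flow_hw.c hunks directly above: instead of embedding struct mlx5dr_rule by value, struct rte_flow_hw now ends in a uint8_t rule[0] flexible array, the indexed-pool element size is enlarged by mlx5dr_rule_get_handle_size(), and the async create/destroy paths cast flow->rule back to struct mlx5dr_rule *. The handle therefore stays opaque to the PMD; only the size query is part of the contract. Below is a minimal sketch of the same pattern with a hypothetical example_flow wrapper and allocator; it is illustrative only and not code from the series.]

#include <stdint.h>
#include <stdlib.h>
#include "mlx5dr.h"

/* Hypothetical wrapper mirroring the rte_flow_hw layout change above. */
struct example_flow {
	uint32_t idx;		/* caller-side bookkeeping */
	uint8_t rule[0];	/* trailing opaque mlx5dr rule handle */
};

static struct example_flow *
example_flow_alloc(void)
{
	/* One allocation holds the wrapper plus the run-time-sized handle,
	 * the same way the indexed pool element size is computed above. */
	return calloc(1, sizeof(struct example_flow) +
			 mlx5dr_rule_get_handle_size());
}

static inline struct mlx5dr_rule *
example_flow_rule(struct example_flow *flow)
{
	/* Same cast the async create/destroy paths use on flow->rule. */
	return (struct mlx5dr_rule *)flow->rule;
}

[In the driver itself the wrapper is struct rte_flow_hw allocated from the indexed pool, but the sizing and cast idiom is the same.]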
* [v6 00/18] net/mlx5: Add HW steering low level support 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker ` (22 preceding siblings ...) 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-20 15:57 ` [v6 01/18] net/mlx5: split flow item translation Alex Vesker ` (18 more replies) 23 siblings, 19 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm; +Cc: dev, orika Mellanox ConnetX devices supports packet matching, packet modification and redirection. These functionalities are also referred to as flow-steering. To configure a steering rule, the rule is written to the device owned memory, this memory is accessed and cached by the device when processing a packet. The highlight of this patchset is supporting HW Steering (HWS) which is the new technology supported in new ConnectX devices, HWS allows configuring steering rules directly to the HW using special HW queues with minimal CPU effort. This patchset is the internal low layer implementation for HWS used by the mlx5 PMD. The mlx5dr (direct rule) is layer that bridges between the PMD and the HW by configuring the HW offloads based on the PMD logic v2: Fix check patch and cosmetic changes v3: -Fix unsupported items -Fix compilation with mlx5dv dependency v4: -Fix compile on Windows v5: -Fix compile on old rdma-core or no rdma core v6: -Fix meson style and improve configure -Checkpatch and compilation fixes -Fix action number issue Alex Vesker (8): net/mlx5: Add additional glue functions for HWS net/mlx5/hws: Add HWS send layer net/mlx5/hws: Add HWS definer layer net/mlx5/hws: Add HWS context object net/mlx5/hws: Add HWS table object net/mlx5/hws: Add HWS matcher object net/mlx5/hws: Add HWS rule object net/mlx5/hws: Enable HWS Bing Zhao (2): common/mlx5: query set capability of registers net/mlx5: provide the available tag registers Dariusz Sosnowski (1): net/mlx5: add port to metadata conversion Erez Shitrit (3): net/mlx5/hws: Add HWS command layer net/mlx5/hws: Add HWS pool and buddy net/mlx5/hws: Add HWS action object Hamdan Igbaria (1): net/mlx5/hws: Add HWS debug layer Suanming Mou (3): net/mlx5: split flow item translation net/mlx5: split flow item matcher and value translation net/mlx5: add hardware steering item translation function doc/guides/nics/features/default.ini | 1 + doc/guides/nics/features/mlx5.ini | 1 + doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 11 +- drivers/common/mlx5/linux/mlx5_glue.c | 121 +- drivers/common/mlx5/linux/mlx5_glue.h | 17 + drivers/common/mlx5/mlx5_devx_cmds.c | 30 + drivers/common/mlx5/mlx5_devx_cmds.h | 2 + drivers/common/mlx5/mlx5_prm.h | 652 ++++- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 201 +- drivers/net/mlx5/hws/mlx5dr_action.c | 2237 +++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 ++ drivers/net/mlx5/hws/mlx5dr_buddy.c | 200 ++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 +++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 + drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 ++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++ drivers/net/mlx5/hws/mlx5dr_internal.h 
| 93 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 919 +++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 +++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 ++ drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 + drivers/net/mlx5/linux/mlx5_os.c | 12 +- drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 9 +- drivers/net/mlx5/mlx5.h | 8 +- drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 --- drivers/net/mlx5/mlx5_flow.c | 29 +- drivers/net/mlx5/mlx5_flow.h | 174 +- drivers/net/mlx5/mlx5_flow_dv.c | 2567 +++++++++--------- drivers/net/mlx5/mlx5_flow_hw.c | 115 +- 48 files changed, 14368 insertions(+), 1694 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (67%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
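[Editorial note, for orientation before the individual v6 patches: the mlx5dr layer described in the cover letter is driven in a fixed order. A context is opened on an ibv context, a table and one or more match templates are created, a matcher binds the templates (action templates may be omitted, as the PMD's flow_hw_table_create() hunk earlier in the thread does), and rules are then enqueued and completed asynchronously through the HW send queues. The fragment below is a minimal sketch and not part of the series: the attribute structures are left zero-initialized because their fields are outside this excerpt, the action flags, queue index 0 and the rule_acts[0].action member are assumptions, and error handling and cleanup are omitted.]

#include <stdlib.h>
#include <rte_flow.h>
#include "mlx5dr.h"

static int
example_enqueue_rule(void *ibv_ctx,
		     const struct rte_flow_item masks[],
		     const struct rte_flow_item values[])
{
	struct mlx5dr_context_attr ctx_attr = {0};
	struct mlx5dr_table_attr tbl_attr = {0};
	struct mlx5dr_matcher_attr matcher_attr = {0};
	struct mlx5dr_rule_attr rule_attr = {0};
	struct mlx5dr_rule_action rule_acts[1];
	struct rte_flow_op_result res[4];
	struct mlx5dr_match_template *mt[1];
	struct mlx5dr_context *ctx;
	struct mlx5dr_table *tbl;
	struct mlx5dr_matcher *matcher;
	struct mlx5dr_action *drop;
	struct mlx5dr_rule *rule;
	int ret;

	ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
	tbl = mlx5dr_table_create(ctx, &tbl_attr);
	mt[0] = mlx5dr_match_template_create(masks, 0);
	/* No action templates, as the flow_hw_table_create() hunk does. */
	matcher = mlx5dr_matcher_create(tbl, mt, 1, NULL, 0, &matcher_attr);
	/* Flags come from enum mlx5dr_action_flags; omitted in this sketch. */
	drop = mlx5dr_action_create_dest_drop(ctx, 0);
	/* The rule handle is opaque; size it at run time. */
	rule = malloc(mlx5dr_rule_get_handle_size());
	if (!ctx || !tbl || !mt[0] || !matcher || !drop || !rule)
		return -1;
	rule_acts[0].action = drop;	/* member name assumed */
	ret = mlx5dr_rule_create(matcher, 0 /* mt_idx */, values,
				 0 /* at_idx */, rule_acts,
				 &rule_attr, rule);
	if (ret)
		return ret;
	/* Insertion is asynchronous; completions arrive on the send queue. */
	return mlx5dr_send_queue_poll(ctx, 0 /* queue id assumed */, res, 4);
}

[A real caller would also destroy the rule, matcher, template, table and context in reverse order, and can use mlx5dr_send_queue_action() for queue-level operations (enum mlx5dr_send_queue_actions).]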
* [v6 01/18] net/mlx5: split flow item translation 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:47 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker ` (17 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> In order to share the item translation code with hardware steering mode, this commit splits flow item translation code to a dedicate function. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow_dv.c | 1915 ++++++++++++++++--------------- 1 file changed, 979 insertions(+), 936 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 4bdcb1815b..0f3ff4db51 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -13076,8 +13076,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Fill the flow with DV spec, lock free - * (mutex should be acquired by caller). + * Translate the flow item to matcher. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13087,8 +13086,8 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] actions - * Pointer to the list of actions. + * @param[in] matcher + * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. * @@ -13096,650 +13095,656 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. 
*/ static int -flow_dv_translate(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - const struct rte_flow_action actions[], - struct rte_flow_error *error) +flow_dv_translate_items(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct mlx5_sh_config *dev_conf = &priv->sh->config; struct rte_flow *flow = dev_flow->flow; struct mlx5_flow_handle *handle = dev_flow->handle; struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc; + struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; uint64_t item_flags = 0; uint64_t last_item = 0; - uint64_t action_flags = 0; - struct mlx5_flow_dv_matcher matcher = { - .mask = { - .size = sizeof(matcher.mask.buf), - }, - }; - int actions_n = 0; - bool actions_end = false; - union { - struct mlx5_flow_dv_modify_hdr_resource res; - uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + - sizeof(struct mlx5_modification_cmd) * - (MLX5_MAX_MODIFY_NUM + 1)]; - } mhdr_dummy; - struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; - const struct rte_flow_action_count *count = NULL; - const struct rte_flow_action_age *non_shared_age = NULL; - union flow_dv_attr flow_attr = { .attr = 0 }; - uint32_t tag_be; - union mlx5_flow_tbl_key tbl_key; - uint32_t modify_action_position = UINT32_MAX; - void *match_mask = matcher.mask.buf; + void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; uint8_t next_protocol = 0xff; - struct rte_vlan_hdr vlan = { 0 }; - struct mlx5_flow_dv_dest_array_resource mdest_res; - struct mlx5_flow_dv_sample_resource sample_res; - void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; - const struct rte_flow_action_sample *sample = NULL; - struct mlx5_flow_sub_actions_list *sample_act; - uint32_t sample_act_pos = UINT32_MAX; - uint32_t age_act_pos = UINT32_MAX; - uint32_t num_of_dest = 0; - int tmp_actions_n = 0; - uint32_t table; - int ret = 0; - const struct mlx5_flow_tunnel *tunnel = NULL; - struct flow_grp_info grp_info = { - .external = !!dev_flow->external, - .transfer = !!attr->transfer, - .fdb_def_rule = !!priv->fdb_def_rule, - .skip_scale = dev_flow->skip_scale & - (1 << MLX5_SCALE_FLOW_GROUP_BIT), - .std_tbl_fix = true, - }; + uint16_t priority = 0; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; const struct rte_flow_item *tunnel_item = NULL; const struct rte_flow_item *gre_item = NULL; + int ret = 0; - if (!wks) - return rte_flow_error_set(error, ENOMEM, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "failed to push flow workspace"); - rss_desc = &wks->rss_desc; - memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); - memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); - mhdr_res->ft_type = attr->egress ? 
MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - /* update normal path action resource into last index of array */ - sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; - if (is_tunnel_offload_active(dev)) { - if (dev_flow->tunnel) { - RTE_VERIFY(dev_flow->tof_type == - MLX5_TUNNEL_OFFLOAD_MISS_RULE); - tunnel = dev_flow->tunnel; - } else { - tunnel = mlx5_get_tof(items, actions, - &dev_flow->tof_type); - dev_flow->tunnel = tunnel; - } - grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate - (dev, attr, tunnel, dev_flow->tof_type); - } - mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : - MLX5DV_FLOW_TABLE_TYPE_NIC_RX; - ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, - &grp_info, error); - if (ret) - return ret; - dev_flow->dv.group = table; - if (attr->transfer) - mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; - /* number of actions must be set to 0 in case of dirty stack. */ - mhdr_res->actions_num = 0; - if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { - /* - * do not add decap action if match rule drops packet - * HW rejects rules with decap & drop - * - * if tunnel match rule was inserted before matching tunnel set - * rule flow table used in the match rule must be registered. - * current implementation handles that in the - * flow_dv_match_register() at the function end. - */ - bool add_decap = true; - const struct rte_flow_action *ptr = actions; - - for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { - if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { - add_decap = false; - break; - } - } - if (add_decap) { - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; - } - } - for (; !actions_end ; actions++) { - const struct rte_flow_action_queue *queue; - const struct rte_flow_action_rss *rss; - const struct rte_flow_action *action = actions; - const uint8_t *rss_key; - struct mlx5_flow_tbl_resource *tbl; - struct mlx5_aso_age_action *age_act; - struct mlx5_flow_counter *cnt_act; - uint32_t port_id = 0; - struct mlx5_flow_dv_port_id_action_resource port_id_resource; - int action_type = actions->type; - const struct rte_flow_action *found_action = NULL; - uint32_t jump_group = 0; - uint32_t owner_idx; - struct mlx5_aso_ct_action *ct; + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; - if (!mlx5_flow_os_action_supported(action_type)) + if (!mlx5_flow_os_item_supported(item_type)) return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, - actions, - "action not supported"); - switch (action_type) { - case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: - action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; break; - case RTE_FLOW_ACTION_TYPE_VOID: + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_PORT_ID; break; - case RTE_FLOW_ACTION_TYPE_PORT_ID: - case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: - if (flow_dv_translate_action_port_id(dev, action, - &port_id, error)) - return -rte_errno; - port_id_resource.port_id = port_id; - 
MLX5_ASSERT(!handle->rix_port_id_action); - if (flow_dv_port_id_action_resource_register - (dev, &port_id_resource, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.port_id_action->action; - action_flags |= MLX5_FLOW_ACTION_PORT_ID; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; - sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, match_mask, match_value, items, attr); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; break; - case RTE_FLOW_ACTION_TYPE_FLAG: - action_flags |= MLX5_FLOW_ACTION_FLAG; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - struct rte_flow_action_mark mark = { - .id = MLX5_FLOW_MARK_DEFAULT, - }; - - if (flow_dv_convert_action_mark(dev, &mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = dev_flow->act_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !dev_flow->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(dev_flow, + match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv4(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); - /* - * Only one FLAG or MARK is supported per device flow - * right now. So the pointer to the tag resource must be - * zero before the register process. - */ - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_MARK: - action_flags |= MLX5_FLOW_ACTION_MARK; - wks->mark = 1; - if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { - const struct rte_flow_action_mark *mark = - (const struct rte_flow_action_mark *) - actions->conf; - - if (flow_dv_convert_action_mark(dev, mark, - mhdr_res, - error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MARK_EXT; - break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &item_flags, &tunnel); + flow_dv_translate_item_ipv6(match_mask, match_value, + items, tunnel, + dev_flow->dv.group); + priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; } - /* Fall-through */ - case MLX5_RTE_FLOW_ACTION_TYPE_MARK: - /* Legacy (non-extensive) MARK action. */ - tag_be = mlx5_flow_mark_set - (((const struct rte_flow_action_mark *) - (actions->conf))->id); - MLX5_ASSERT(!handle->dvh.rix_tag); - if (flow_dv_tag_resource_register(dev, tag_be, - dev_flow, error)) - return -rte_errno; - MLX5_ASSERT(dev_flow->dv.tag_resource); - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.tag_resource->action; break; - case RTE_FLOW_ACTION_TYPE_SET_META: - if (flow_dv_convert_action_set_meta - (dev, mhdr_res, attr, - (const struct rte_flow_action_set_meta *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_META; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext(match_mask, + match_value, + items, tunnel); + last_item = tunnel ? + MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } break; - case RTE_FLOW_ACTION_TYPE_SET_TAG: - if (flow_dv_convert_action_set_tag - (dev, mhdr_res, - (const struct rte_flow_action_set_tag *) - actions->conf, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; break; - case RTE_FLOW_ACTION_TYPE_DROP: - action_flags |= MLX5_FLOW_ACTION_DROP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; break; - case RTE_FLOW_ACTION_TYPE_QUEUE: - queue = actions->conf; - rss_desc->queue_num = 1; - rss_desc->queue[0] = queue->index; - action_flags |= MLX5_FLOW_ACTION_QUEUE; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; - sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; - num_of_dest++; + case RTE_FLOW_ITEM_TYPE_GRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; + gre_item = items; break; - case RTE_FLOW_ACTION_TYPE_RSS: - rss = actions->conf; - memcpy(rss_desc->queue, rss->queue, - rss->queue_num * sizeof(uint16_t)); - rss_desc->queue_num = rss->queue_num; - /* NULL RSS key indicates default RSS key. */ - rss_key = !rss->key ? rss_hash_default_key : rss->key; - memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); - /* - * rss->level and rss.types should be set in advance - * when expanding items for RSS. - */ - action_flags |= MLX5_FLOW_ACTION_RSS; - dev_flow->handle->fate_action = rss_desc->shared_rss ? 
- MLX5_FLOW_FATE_SHARED_RSS : - MLX5_FLOW_FATE_QUEUE; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(match_mask, + match_value, items); + last_item = MLX5_FLOW_LAYER_GRE_KEY; break; - case MLX5_RTE_FLOW_ACTION_TYPE_AGE: - owner_idx = (uint32_t)(uintptr_t)action->conf; - age_act = flow_aso_age_get_by_idx(dev, owner_idx); - if (flow->age == 0) { - flow->age = owner_idx; - __atomic_fetch_add(&age_act->refcnt, 1, - __ATOMIC_RELAXED); - } - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_AGE: - non_shared_age = action->conf; - age_act_pos = actions_n++; - action_flags |= MLX5_FLOW_ACTION_AGE; + case RTE_FLOW_ITEM_TYPE_NVGRE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GRE; + tunnel_item = items; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: - owner_idx = (uint32_t)(uintptr_t)action->conf; - cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, - NULL); - MLX5_ASSERT(cnt_act != NULL); - /** - * When creating meter drop flow in drop table, the - * counter should not overwrite the rte flow counter. - */ - if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && - dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { - dev_flow->dv.actions[actions_n++] = - cnt_act->action; - } else { - if (flow->counter == 0) { - flow->counter = owner_idx; - __atomic_fetch_add - (&cnt_act->shared_info.refcnt, - 1, __ATOMIC_RELAXED); - } - /* Save information first, will apply later. */ - action_flags |= MLX5_FLOW_ACTION_COUNT; - } + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, attr, + match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; break; - case RTE_FLOW_ACTION_TYPE_COUNT: - if (!priv->sh->cdev->config.devx) { - return rte_flow_error_set - (error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, - "count action not supported"); - } - /* Save information first, will apply later. 
*/ - count = action->conf; - action_flags |= MLX5_FLOW_ACTION_COUNT; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: - dev_flow->dv.actions[actions_n++] = - priv->sh->pop_vlan_action; - action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE: + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GENEVE; + tunnel_item = items; break; - case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: - if (!(action_flags & - MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) - flow_dev_get_vlan_info_from_items(items, &vlan); - vlan.eth_proto = rte_be_to_cpu_16 - ((((const struct rte_flow_action_of_push_vlan *) - actions->conf)->ethertype)); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - found_action = mlx5_flow_find_action - (actions + 1, - RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); - if (found_action) - mlx5_update_vlan_vid_pcp(found_action, &vlan); - if (flow_dv_create_action_push_vlan - (dev, attr, &vlan, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.push_vlan_res->action; - action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt(dev, match_mask, + match_value, + items, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + flow->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: - /* of_vlan_push action handled this action */ - MLX5_ASSERT(action_flags & - MLX5_FLOW_ACTION_OF_PUSH_VLAN); + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(match_mask, match_value, + items, last_item, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; break; - case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: - if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) - break; - flow_dev_get_vlan_info_from_items(items, &vlan); - mlx5_update_vlan_vid_pcp(actions, &vlan); - /* If no VLAN push - this is a modify header action */ - if (flow_dv_convert_action_modify_vlan_vid - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_MARK; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: - if (flow_dv_create_action_l2_encap(dev, actions, - dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta(dev, match_mask, + match_value, attr, items); + last_item = MLX5_FLOW_ITEM_METADATA; break; - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: - if (flow_dv_create_action_l2_decap(dev, dev_flow, - attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(match_mask, match_value, + items, 
tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; break; - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: - /* Handle encap with preceding decap. */ - if (action_flags & MLX5_FLOW_ACTION_DECAP) { - if (flow_dv_create_action_raw_encap - (dev, actions, dev_flow, attr, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } else { - /* Handle encap without preceding decap. */ - if (flow_dv_create_action_l2_encap - (dev, actions, dev_flow, attr->transfer, - error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - action_flags |= MLX5_FLOW_ACTION_ENCAP; - if (action_flags & MLX5_FLOW_ACTION_SAMPLE) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(match_mask, match_value, + items, tunnel); + priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; break; - case RTE_FLOW_ACTION_TYPE_RAW_DECAP: - while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) - ; - if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { - if (flow_dv_create_action_l2_decap - (dev, dev_flow, attr->transfer, error)) - return -rte_errno; - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.encap_decap->action; - } - /* If decap is followed by encap, handle it at encap. */ - action_flags |= MLX5_FLOW_ACTION_DECAP; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: - dev_flow->dv.actions[actions_n++] = - (void *)(uintptr_t)action->conf; - action_flags |= MLX5_FLOW_ACTION_JUMP; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, match_mask, + match_value, items); + last_item = MLX5_FLOW_ITEM_TAG; break; - case RTE_FLOW_ACTION_TYPE_JUMP: - jump_group = ((const struct rte_flow_action_jump *) - action->conf)->group; - grp_info.std_tbl_fix = 0; - if (dev_flow->skip_scale & - (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) - grp_info.skip_scale = 1; - else - grp_info.skip_scale = 0; - ret = mlx5_flow_group_to_table(dev, tunnel, - jump_group, - &table, - &grp_info, error); + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, match_mask, + match_value, + items); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(match_mask, match_value, + items, tunnel); + priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(match_mask, + match_value, + items); if (ret) - return ret; - tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, - attr->transfer, - !!dev_flow->external, - tunnel, jump_group, 0, - 0, error); - if (!tbl) - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); - if (flow_dv_jump_tbl_resource_register - (dev, tbl, dev_flow, error)) { - flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); - return rte_flow_error_set - (error, errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "cannot create jump action."); + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri(dev, match_mask, + match_value, items, + last_item); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + flow_dv_translate_item_integrity(items, integrity_items, + &last_item); + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + flow_dv_translate_item_aso_ct(dev, match_mask, + match_value, items); + break; + case RTE_FLOW_ITEM_TYPE_FLEX: + flow_dv_translate_item_flex(dev, match_mask, + match_value, items, + dev_flow, tunnel != 0); + last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; + break; + default: + break; + } + item_flags |= last_item; + } + /* + * When E-Switch mode is enabled, we have two cases where we need to + * set the source port manually. + * The first one, is in case of NIC ingress steering rule, and the + * second is E-Switch rule where no port_id item was found. + * In both cases the source port is set according the current port + * in use. + */ + if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + !(attr->egress && !attr->transfer)) { + if (flow_dv_translate_item_port_id(dev, match_mask, + match_value, NULL, attr)) + return -rte_errno; + } + if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + flow_dv_translate_item_integrity_post(match_mask, match_value, + integrity_items, + item_flags); + } + if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) + flow_dv_translate_item_vxlan_gpe(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GENEVE) + flow_dv_translate_item_geneve(match_mask, match_value, + tunnel_item, item_flags); + else if (item_flags & MLX5_FLOW_LAYER_GRE) { + if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) + flow_dv_translate_item_gre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) + flow_dv_translate_item_nvgre(match_mask, match_value, + tunnel_item, item_flags); + else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) + flow_dv_translate_item_gre_option(match_mask, match_value, + tunnel_item, gre_item, item_flags); + else + MLX5_ASSERT(false); + } + matcher->priority = priority; +#ifdef RTE_LIBRTE_MLX5_DEBUG + MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, + dev_flow->dv.value.buf)); +#endif + /* + * Layers may be already initialized from prefix flow if this dev_flow + * is the suffix flow. + */ + handle->layers |= item_flags; + return ret; +} + +/** + * Fill the flow with DV spec, lock free + * (mutex should be acquired by caller). + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in, out] dev_flow + * Pointer to the sub flow. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] items + * Pointer to the list of items. + * @param[in] actions + * Pointer to the list of actions. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. 
+ */ +static int +flow_dv_translate(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item items[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct mlx5_priv *priv = dev->data->dev_private; + struct mlx5_sh_config *dev_conf = &priv->sh->config; + struct rte_flow *flow = dev_flow->flow; + struct mlx5_flow_handle *handle = dev_flow->handle; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); + struct mlx5_flow_rss_desc *rss_desc; + uint64_t action_flags = 0; + struct mlx5_flow_dv_matcher matcher = { + .mask = { + .size = sizeof(matcher.mask.buf), + }, + }; + int actions_n = 0; + bool actions_end = false; + union { + struct mlx5_flow_dv_modify_hdr_resource res; + uint8_t len[sizeof(struct mlx5_flow_dv_modify_hdr_resource) + + sizeof(struct mlx5_modification_cmd) * + (MLX5_MAX_MODIFY_NUM + 1)]; + } mhdr_dummy; + struct mlx5_flow_dv_modify_hdr_resource *mhdr_res = &mhdr_dummy.res; + const struct rte_flow_action_count *count = NULL; + const struct rte_flow_action_age *non_shared_age = NULL; + union flow_dv_attr flow_attr = { .attr = 0 }; + uint32_t tag_be; + union mlx5_flow_tbl_key tbl_key; + uint32_t modify_action_position = UINT32_MAX; + struct rte_vlan_hdr vlan = { 0 }; + struct mlx5_flow_dv_dest_array_resource mdest_res; + struct mlx5_flow_dv_sample_resource sample_res; + void *sample_actions[MLX5_DV_MAX_NUMBER_OF_ACTIONS] = {0}; + const struct rte_flow_action_sample *sample = NULL; + struct mlx5_flow_sub_actions_list *sample_act; + uint32_t sample_act_pos = UINT32_MAX; + uint32_t age_act_pos = UINT32_MAX; + uint32_t num_of_dest = 0; + int tmp_actions_n = 0; + uint32_t table; + int ret = 0; + const struct mlx5_flow_tunnel *tunnel = NULL; + struct flow_grp_info grp_info = { + .external = !!dev_flow->external, + .transfer = !!attr->transfer, + .fdb_def_rule = !!priv->fdb_def_rule, + .skip_scale = dev_flow->skip_scale & + (1 << MLX5_SCALE_FLOW_GROUP_BIT), + .std_tbl_fix = true, + }; + + if (!wks) + return rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "failed to push flow workspace"); + rss_desc = &wks->rss_desc; + memset(&mdest_res, 0, sizeof(struct mlx5_flow_dv_dest_array_resource)); + memset(&sample_res, 0, sizeof(struct mlx5_flow_dv_sample_resource)); + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + /* update normal path action resource into last index of array */ + sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1]; + if (is_tunnel_offload_active(dev)) { + if (dev_flow->tunnel) { + RTE_VERIFY(dev_flow->tof_type == + MLX5_TUNNEL_OFFLOAD_MISS_RULE); + tunnel = dev_flow->tunnel; + } else { + tunnel = mlx5_get_tof(items, actions, + &dev_flow->tof_type); + dev_flow->tunnel = tunnel; + } + grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate + (dev, attr, tunnel, dev_flow->tof_type); + } + mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX : + MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table, + &grp_info, error); + if (ret) + return ret; + dev_flow->dv.group = table; + if (attr->transfer) + mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + /* number of actions must be set to 0 in case of dirty stack. 
*/ + mhdr_res->actions_num = 0; + if (is_flow_tunnel_match_rule(dev_flow->tof_type)) { + /* + * do not add decap action if match rule drops packet + * HW rejects rules with decap & drop + * + * if tunnel match rule was inserted before matching tunnel set + * rule flow table used in the match rule must be registered. + * current implementation handles that in the + * flow_dv_match_register() at the function end. + */ + bool add_decap = true; + const struct rte_flow_action *ptr = actions; + + for (; ptr->type != RTE_FLOW_ACTION_TYPE_END; ptr++) { + if (ptr->type == RTE_FLOW_ACTION_TYPE_DROP) { + add_decap = false; + break; } - dev_flow->dv.actions[actions_n++] = - dev_flow->dv.jump->action; - action_flags |= MLX5_FLOW_ACTION_JUMP; - dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; - sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; - num_of_dest++; - break; - case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: - case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: - if (flow_dv_convert_action_modify_mac - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? - MLX5_FLOW_ACTION_SET_MAC_SRC : - MLX5_FLOW_ACTION_SET_MAC_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: - if (flow_dv_convert_action_modify_ipv4 - (mhdr_res, actions, error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? - MLX5_FLOW_ACTION_SET_IPV4_SRC : - MLX5_FLOW_ACTION_SET_IPV4_DST; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: - if (flow_dv_convert_action_modify_ipv6 - (mhdr_res, actions, error)) + } + if (add_decap) { + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? - MLX5_FLOW_ACTION_SET_IPV6_SRC : - MLX5_FLOW_ACTION_SET_IPV6_DST; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; + } + } + for (; !actions_end ; actions++) { + const struct rte_flow_action_queue *queue; + const struct rte_flow_action_rss *rss; + const struct rte_flow_action *action = actions; + const uint8_t *rss_key; + struct mlx5_flow_tbl_resource *tbl; + struct mlx5_aso_age_action *age_act; + struct mlx5_flow_counter *cnt_act; + uint32_t port_id = 0; + struct mlx5_flow_dv_port_id_action_resource port_id_resource; + int action_type = actions->type; + const struct rte_flow_action *found_action = NULL; + uint32_t jump_group = 0; + uint32_t owner_idx; + struct mlx5_aso_ct_action *ct; + + if (!mlx5_flow_os_action_supported(action_type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + actions, + "action not supported"); + switch (action_type) { + case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET: + action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET; break; - case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: - case RTE_FLOW_ACTION_TYPE_SET_TP_DST: - if (flow_dv_convert_action_modify_tp - (mhdr_res, actions, items, - &flow_attr, dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) - return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? 
- MLX5_FLOW_ACTION_SET_TP_SRC : - MLX5_FLOW_ACTION_SET_TP_DST; + case RTE_FLOW_ACTION_TYPE_VOID: break; - case RTE_FLOW_ACTION_TYPE_DEC_TTL: - if (flow_dv_convert_action_modify_dec_ttl - (mhdr_res, items, &flow_attr, dev_flow, - !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + case RTE_FLOW_ACTION_TYPE_PORT_ID: + case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT: + if (flow_dv_translate_action_port_id(dev, action, + &port_id, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_DEC_TTL; - break; - case RTE_FLOW_ACTION_TYPE_SET_TTL: - if (flow_dv_convert_action_modify_ttl - (mhdr_res, actions, items, &flow_attr, - dev_flow, !!(action_flags & - MLX5_FLOW_ACTION_DECAP), error)) + port_id_resource.port_id = port_id; + MLX5_ASSERT(!handle->rix_port_id_action); + if (flow_dv_port_id_action_resource_register + (dev, &port_id_resource, dev_flow, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TTL; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.port_id_action->action; + action_flags |= MLX5_FLOW_ACTION_PORT_ID; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_PORT_ID; + sample_act->action_flags |= MLX5_FLOW_ACTION_PORT_ID; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: - if (flow_dv_convert_action_modify_tcp_seq - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_FLAG: + action_flags |= MLX5_FLOW_ACTION_FLAG; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + struct rte_flow_action_mark mark = { + .id = MLX5_FLOW_MARK_DEFAULT, + }; + + if (flow_dv_convert_action_mark(dev, &mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + tag_be = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT); + /* + * Only one FLAG or MARK is supported per device flow + * right now. So the pointer to the tag resource must be + * zero before the register process. + */ + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? - MLX5_FLOW_ACTION_INC_TCP_SEQ : - MLX5_FLOW_ACTION_DEC_TCP_SEQ; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; + case RTE_FLOW_ACTION_TYPE_MARK: + action_flags |= MLX5_FLOW_ACTION_MARK; + wks->mark = 1; + if (dev_conf->dv_xmeta_en != MLX5_XMETA_MODE_LEGACY) { + const struct rte_flow_action_mark *mark = + (const struct rte_flow_action_mark *) + actions->conf; - case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: - case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: - if (flow_dv_convert_action_modify_tcp_ack - (mhdr_res, actions, error)) + if (flow_dv_convert_action_mark(dev, mark, + mhdr_res, + error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MARK_EXT; + break; + } + /* Fall-through */ + case MLX5_RTE_FLOW_ACTION_TYPE_MARK: + /* Legacy (non-extensive) MARK action. */ + tag_be = mlx5_flow_mark_set + (((const struct rte_flow_action_mark *) + (actions->conf))->id); + MLX5_ASSERT(!handle->dvh.rix_tag); + if (flow_dv_tag_resource_register(dev, tag_be, + dev_flow, error)) return -rte_errno; - action_flags |= actions->type == - RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
- MLX5_FLOW_ACTION_INC_TCP_ACK : - MLX5_FLOW_ACTION_DEC_TCP_ACK; + MLX5_ASSERT(dev_flow->dv.tag_resource); + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.tag_resource->action; break; - case MLX5_RTE_FLOW_ACTION_TYPE_TAG: - if (flow_dv_convert_action_set_reg - (mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_META: + if (flow_dv_convert_action_set_meta + (dev, mhdr_res, attr, + (const struct rte_flow_action_set_meta *) + actions->conf, error)) return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_TAG; + action_flags |= MLX5_FLOW_ACTION_SET_META; break; - case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: - if (flow_dv_convert_action_copy_mreg - (dev, mhdr_res, actions, error)) + case RTE_FLOW_ACTION_TYPE_SET_TAG: + if (flow_dv_convert_action_set_tag + (dev, mhdr_res, + (const struct rte_flow_action_set_tag *) + actions->conf, error)) return -rte_errno; action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: - action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; - dev_flow->handle->fate_action = - MLX5_FLOW_FATE_DEFAULT_MISS; - break; - case RTE_FLOW_ACTION_TYPE_METER: - if (!wks->fm) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, "Failed to get meter in flow."); - /* Set the meter action. */ - dev_flow->dv.actions[actions_n++] = - wks->fm->meter_action_g; - action_flags |= MLX5_FLOW_ACTION_METER; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: - if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; - break; - case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: - if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, - actions, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; + case RTE_FLOW_ACTION_TYPE_DROP: + action_flags |= MLX5_FLOW_ACTION_DROP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_DROP; break; - case RTE_FLOW_ACTION_TYPE_SAMPLE: - sample_act_pos = actions_n; - sample = (const struct rte_flow_action_sample *) - action->conf; - actions_n++; - action_flags |= MLX5_FLOW_ACTION_SAMPLE; - /* put encap action into group if work with port id */ - if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && - (action_flags & MLX5_FLOW_ACTION_PORT_ID)) - sample_act->action_flags |= - MLX5_FLOW_ACTION_ENCAP; + case RTE_FLOW_ACTION_TYPE_QUEUE: + queue = actions->conf; + rss_desc->queue_num = 1; + rss_desc->queue[0] = queue->index; + action_flags |= MLX5_FLOW_ACTION_QUEUE; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_QUEUE; + sample_act->action_flags |= MLX5_FLOW_ACTION_QUEUE; + num_of_dest++; break; - case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: - if (flow_dv_convert_action_modify_field - (dev, mhdr_res, actions, attr, error)) - return -rte_errno; - action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + case RTE_FLOW_ACTION_TYPE_RSS: + rss = actions->conf; + memcpy(rss_desc->queue, rss->queue, + rss->queue_num * sizeof(uint16_t)); + rss_desc->queue_num = rss->queue_num; + /* NULL RSS key indicates default RSS key. */ + rss_key = !rss->key ? rss_hash_default_key : rss->key; + memcpy(rss_desc->key, rss_key, MLX5_RSS_HASH_KEY_LEN); + /* + * rss->level and rss.types should be set in advance + * when expanding items for RSS. + */ + action_flags |= MLX5_FLOW_ACTION_RSS; + dev_flow->handle->fate_action = rss_desc->shared_rss ? 
+ MLX5_FLOW_FATE_SHARED_RSS : + MLX5_FLOW_FATE_QUEUE; break; - case RTE_FLOW_ACTION_TYPE_CONNTRACK: + case MLX5_RTE_FLOW_ACTION_TYPE_AGE: owner_idx = (uint32_t)(uintptr_t)action->conf; - ct = flow_aso_ct_get_by_idx(dev, owner_idx); - if (!ct) - return rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "Failed to get CT object."); - if (mlx5_aso_ct_available(priv->sh, ct)) - return rte_flow_error_set(error, rte_errno, - RTE_FLOW_ERROR_TYPE_ACTION, - NULL, - "CT is unavailable."); - if (ct->is_original) - dev_flow->dv.actions[actions_n] = - ct->dr_action_orig; - else - dev_flow->dv.actions[actions_n] = - ct->dr_action_rply; - if (flow->ct == 0) { - flow->indirect_type = - MLX5_INDIRECT_ACTION_TYPE_CT; - flow->ct = owner_idx; - __atomic_fetch_add(&ct->refcnt, 1, + age_act = flow_aso_age_get_by_idx(dev, owner_idx); + if (flow->age == 0) { + flow->age = owner_idx; + __atomic_fetch_add(&age_act->refcnt, 1, __ATOMIC_RELAXED); } - actions_n++; - action_flags |= MLX5_FLOW_ACTION_CT; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; break; case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL: dev_flow->dv.actions[actions_n] = @@ -13752,396 +13757,435 @@ flow_dv_translate(struct rte_eth_dev *dev, dev_flow->handle->fate_action = MLX5_FLOW_FATE_SEND_TO_KERNEL; break; - case RTE_FLOW_ACTION_TYPE_END: - actions_end = true; - if (mhdr_res->actions_num) { - /* create modify action if needed. */ - if (flow_dv_modify_hdr_resource_register - (dev, mhdr_res, dev_flow, error)) - return -rte_errno; - dev_flow->dv.actions[modify_action_position] = - handle->dvh.modify_hdr->action; - } - /* - * Handle AGE and COUNT action by single HW counter - * when they are not shared. + case RTE_FLOW_ACTION_TYPE_AGE: + non_shared_age = action->conf; + age_act_pos = actions_n++; + action_flags |= MLX5_FLOW_ACTION_AGE; + break; + case MLX5_RTE_FLOW_ACTION_TYPE_COUNT: + owner_idx = (uint32_t)(uintptr_t)action->conf; + cnt_act = flow_dv_counter_get_by_idx(dev, owner_idx, + NULL); + MLX5_ASSERT(cnt_act != NULL); + /** + * When creating meter drop flow in drop table, the + * counter should not overwrite the rte flow counter. */ - if (action_flags & MLX5_FLOW_ACTION_AGE) { - if ((non_shared_age && count) || - !flow_hit_aso_supported(priv->sh, attr)) { - /* Creates age by counters. */ - cnt_act = flow_dv_prepare_counter - (dev, dev_flow, - flow, count, - non_shared_age, - error); - if (!cnt_act) - return -rte_errno; - dev_flow->dv.actions[age_act_pos] = - cnt_act->action; - break; - } - if (!flow->age && non_shared_age) { - flow->age = flow_dv_aso_age_alloc - (dev, error); - if (!flow->age) - return -rte_errno; - flow_dv_aso_age_params_init - (dev, flow->age, - non_shared_age->context ? - non_shared_age->context : - (void *)(uintptr_t) - (dev_flow->flow_idx), - non_shared_age->timeout); - } - age_act = flow_aso_age_get_by_idx(dev, - flow->age); - dev_flow->dv.actions[age_act_pos] = - age_act->dr_action; - } - if (action_flags & MLX5_FLOW_ACTION_COUNT) { - /* - * Create one count action, to be used - * by all sub-flows. 
- */ - cnt_act = flow_dv_prepare_counter(dev, dev_flow, - flow, count, - NULL, error); - if (!cnt_act) - return -rte_errno; + if (attr->group == MLX5_FLOW_TABLE_LEVEL_METER && + dev_flow->dv.table_id == MLX5_MTR_TABLE_ID_DROP) { dev_flow->dv.actions[actions_n++] = - cnt_act->action; + cnt_act->action; + } else { + if (flow->counter == 0) { + flow->counter = owner_idx; + __atomic_fetch_add + (&cnt_act->shared_info.refcnt, + 1, __ATOMIC_RELAXED); + } + /* Save information first, will apply later. */ + action_flags |= MLX5_FLOW_ACTION_COUNT; } - default: break; - } - if (mhdr_res->actions_num && - modify_action_position == UINT32_MAX) - modify_action_position = actions_n++; - } - for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) - return rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; + case RTE_FLOW_ACTION_TYPE_COUNT: + if (!priv->sh->cdev->config.devx) { + return rte_flow_error_set + (error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "count action not supported"); + } + /* Save information first, will apply later. */ + count = action->conf; + action_flags |= MLX5_FLOW_ACTION_COUNT; break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; + case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN: + dev_flow->dv.actions[actions_n++] = + priv->sh->pop_vlan_action; + action_flags |= MLX5_FLOW_ACTION_OF_POP_VLAN; break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN: + if (!(action_flags & + MLX5_FLOW_ACTION_OF_SET_VLAN_VID)) + flow_dev_get_vlan_info_from_items(items, &vlan); + vlan.eth_proto = rte_be_to_cpu_16 + ((((const struct rte_flow_action_of_push_vlan *) + actions->conf)->ethertype)); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + found_action = mlx5_flow_find_action + (actions + 1, + RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP); + if (found_action) + mlx5_update_vlan_vid_pcp(found_action, &vlan); + if (flow_dv_create_action_push_vlan + (dev, attr, &vlan, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.push_vlan_res->action; + action_flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN; break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = action_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP: + /* of_vlan_push action handled this action */ + MLX5_ASSERT(action_flags & + MLX5_FLOW_ACTION_OF_PUSH_VLAN); break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? 
(MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); + case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID: + if (action_flags & MLX5_FLOW_ACTION_OF_PUSH_VLAN) + break; + flow_dev_get_vlan_info_from_items(items, &vlan); + mlx5_update_vlan_vid_pcp(actions, &vlan); + /* If no VLAN push - this is a modify header action */ + if (flow_dv_convert_action_modify_vlan_vid + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID; break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP: + if (flow_dv_create_action_l2_encap(dev, actions, + dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - matcher.priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP: + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP: + if (flow_dv_create_action_l2_decap(dev, dev_flow, + attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP: + /* Handle encap with preceding decap. */ + if (action_flags & MLX5_FLOW_ACTION_DECAP) { + if (flow_dv_create_action_raw_encap + (dev, actions, dev_flow, attr, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } else { - /* Reset for inner layer. 
*/ - next_protocol = 0xff; + /* Handle encap without preceding decap. */ + if (flow_dv_create_action_l2_encap + (dev, actions, dev_flow, attr->transfer, + error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; } + action_flags |= MLX5_FLOW_ACTION_ENCAP; + if (action_flags & MLX5_FLOW_ACTION_SAMPLE) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; + case RTE_FLOW_ACTION_TYPE_RAW_DECAP: + while ((++action)->type == RTE_FLOW_ACTION_TYPE_VOID) + ; + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) { + if (flow_dv_create_action_l2_decap + (dev, dev_flow, attr->transfer, error)) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.encap_decap->action; + } + /* If decap is followed by encap, handle it at encap. */ + action_flags |= MLX5_FLOW_ACTION_DECAP; break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; + case MLX5_RTE_FLOW_ACTION_TYPE_JUMP: + dev_flow->dv.actions[actions_n++] = + (void *)(uintptr_t)action->conf; + action_flags |= MLX5_FLOW_ACTION_JUMP; break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_JUMP: + jump_group = ((const struct rte_flow_action_jump *) + action->conf)->group; + grp_info.std_tbl_fix = 0; + if (dev_flow->skip_scale & + (1 << MLX5_SCALE_JUMP_FLOW_GROUP_BIT)) + grp_info.skip_scale = 1; + else + grp_info.skip_scale = 0; + ret = mlx5_flow_group_to_table(dev, tunnel, + jump_group, + &table, + &grp_info, error); + if (ret) + return ret; + tbl = flow_dv_tbl_resource_get(dev, table, attr->egress, + attr->transfer, + !!dev_flow->external, + tunnel, jump_group, 0, + 0, error); + if (!tbl) + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + if (flow_dv_jump_tbl_resource_register + (dev, tbl, dev_flow, error)) { + flow_dv_tbl_resource_release(MLX5_SH(dev), tbl); + return rte_flow_error_set + (error, errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "cannot create jump action."); + } + dev_flow->dv.actions[actions_n++] = + dev_flow->dv.jump->action; + action_flags |= MLX5_FLOW_ACTION_JUMP; + dev_flow->handle->fate_action = MLX5_FLOW_FATE_JUMP; + sample_act->action_flags |= MLX5_FLOW_ACTION_JUMP; + num_of_dest++; break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC: + case RTE_FLOW_ACTION_TYPE_SET_MAC_DST: + if (flow_dv_convert_action_modify_mac + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_MAC_SRC ? 
+ MLX5_FLOW_ACTION_SET_MAC_SRC : + MLX5_FLOW_ACTION_SET_MAC_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST: + if (flow_dv_convert_action_modify_ipv4 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC ? + MLX5_FLOW_ACTION_SET_IPV4_SRC : + MLX5_FLOW_ACTION_SET_IPV4_DST; break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC: + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST: + if (flow_dv_convert_action_modify_ipv6 + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC ? + MLX5_FLOW_ACTION_SET_IPV6_SRC : + MLX5_FLOW_ACTION_SET_IPV6_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; + case RTE_FLOW_ACTION_TYPE_SET_TP_SRC: + case RTE_FLOW_ACTION_TYPE_SET_TP_DST: + if (flow_dv_convert_action_modify_tp + (mhdr_res, actions, items, + &flow_attr, dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_SET_TP_SRC ? + MLX5_FLOW_ACTION_SET_TP_SRC : + MLX5_FLOW_ACTION_SET_TP_DST; break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + case RTE_FLOW_ACTION_TYPE_DEC_TTL: + if (flow_dv_convert_action_modify_dec_ttl + (mhdr_res, items, &flow_attr, dev_flow, + !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_DEC_TTL; break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; + case RTE_FLOW_ACTION_TYPE_SET_TTL: + if (flow_dv_convert_action_modify_ttl + (mhdr_res, actions, items, &flow_attr, + dev_flow, !!(action_flags & + MLX5_FLOW_ACTION_DECAP), error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TTL; break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; + case RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_SEQ: + if (flow_dv_convert_action_modify_tcp_seq + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_SEQ ? + MLX5_FLOW_ACTION_INC_TCP_SEQ : + MLX5_FLOW_ACTION_DEC_TCP_SEQ; break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; + + case RTE_FLOW_ACTION_TYPE_INC_TCP_ACK: + case RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK: + if (flow_dv_convert_action_modify_tcp_ack + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= actions->type == + RTE_FLOW_ACTION_TYPE_INC_TCP_ACK ? 
+ MLX5_FLOW_ACTION_INC_TCP_ACK : + MLX5_FLOW_ACTION_DEC_TCP_ACK; break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; + case MLX5_RTE_FLOW_ACTION_TYPE_TAG: + if (flow_dv_convert_action_set_reg + (mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; + case MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG: + if (flow_dv_convert_action_copy_mreg + (dev, mhdr_res, actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_TAG; break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS: + action_flags |= MLX5_FLOW_ACTION_DEFAULT_MISS; + dev_flow->handle->fate_action = + MLX5_FLOW_FATE_DEFAULT_MISS; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; + case RTE_FLOW_ACTION_TYPE_METER: + if (!wks->fm) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Failed to get meter in flow."); + /* Set the meter action. */ + dev_flow->dv.actions[actions_n++] = + wks->fm->meter_action_g; + action_flags |= MLX5_FLOW_ACTION_METER; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case RTE_FLOW_ACTION_TYPE_SET_IPV4_DSCP: + if (flow_dv_convert_action_modify_ipv4_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV4_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - matcher.priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; + case RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP: + if (flow_dv_convert_action_modify_ipv6_dscp(mhdr_res, + actions, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_SET_IPV6_DSCP; break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; + case RTE_FLOW_ACTION_TYPE_SAMPLE: + sample_act_pos = actions_n; + sample = (const struct rte_flow_action_sample *) + action->conf; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_SAMPLE; + /* put encap action into group if work with port id */ + if ((action_flags & MLX5_FLOW_ACTION_ENCAP) && + (action_flags & MLX5_FLOW_ACTION_PORT_ID)) + sample_act->action_flags |= + MLX5_FLOW_ACTION_ENCAP; break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, + case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD: + if (flow_dv_convert_action_modify_field + (dev, mhdr_res, actions, attr, error)) + return -rte_errno; + action_flags |= MLX5_FLOW_ACTION_MODIFY_FIELD; + break; + case RTE_FLOW_ACTION_TYPE_CONNTRACK: + owner_idx = (uint32_t)(uintptr_t)action->conf; + ct = flow_aso_ct_get_by_idx(dev, owner_idx); + if (!ct) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "cannot create eCPRI parser"); + "Failed to get CT object."); + if (mlx5_aso_ct_available(priv->sh, ct)) + return rte_flow_error_set(error, rte_errno, + RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "CT is unavailable."); + if (ct->is_original) + dev_flow->dv.actions[actions_n] = + ct->dr_action_orig; + else + dev_flow->dv.actions[actions_n] = + ct->dr_action_rply; + if (flow->ct == 0) { + flow->indirect_type = + MLX5_INDIRECT_ACTION_TYPE_CT; + flow->ct = owner_idx; + __atomic_fetch_add(&ct->refcnt, 1, + __ATOMIC_RELAXED); } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; - case RTE_FLOW_ITEM_TYPE_INTEGRITY: - flow_dv_translate_item_integrity(items, integrity_items, - &last_item); - break; - case RTE_FLOW_ITEM_TYPE_CONNTRACK: - flow_dv_translate_item_aso_ct(dev, match_mask, - match_value, items); - break; - case RTE_FLOW_ITEM_TYPE_FLEX: - flow_dv_translate_item_flex(dev, match_mask, - match_value, items, - dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + actions_n++; + action_flags |= MLX5_FLOW_ACTION_CT; break; + case RTE_FLOW_ACTION_TYPE_END: + actions_end = true; + if (mhdr_res->actions_num) { + /* create modify action if needed. */ + if (flow_dv_modify_hdr_resource_register + (dev, mhdr_res, dev_flow, error)) + return -rte_errno; + dev_flow->dv.actions[modify_action_position] = + handle->dvh.modify_hdr->action; + } + /* + * Handle AGE and COUNT action by single HW counter + * when they are not shared. + */ + if (action_flags & MLX5_FLOW_ACTION_AGE) { + if ((non_shared_age && count) || + !flow_hit_aso_supported(priv->sh, attr)) { + /* Creates age by counters. */ + cnt_act = flow_dv_prepare_counter + (dev, dev_flow, + flow, count, + non_shared_age, + error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[age_act_pos] = + cnt_act->action; + break; + } + if (!flow->age && non_shared_age) { + flow->age = flow_dv_aso_age_alloc + (dev, error); + if (!flow->age) + return -rte_errno; + flow_dv_aso_age_params_init + (dev, flow->age, + non_shared_age->context ? + non_shared_age->context : + (void *)(uintptr_t) + (dev_flow->flow_idx), + non_shared_age->timeout); + } + age_act = flow_aso_age_get_by_idx(dev, + flow->age); + dev_flow->dv.actions[age_act_pos] = + age_act->dr_action; + } + if (action_flags & MLX5_FLOW_ACTION_COUNT) { + /* + * Create one count action, to be used + * by all sub-flows. + */ + cnt_act = flow_dv_prepare_counter(dev, dev_flow, + flow, count, + NULL, error); + if (!cnt_act) + return -rte_errno; + dev_flow->dv.actions[actions_n++] = + cnt_act->action; + } default: break; } - item_flags |= last_item; - } - /* - * When E-Switch mode is enabled, we have two cases where we need to - * set the source port manually. 
- * The first one, is in case of NIC ingress steering rule, and the - * second is E-Switch rule where no port_id item was found. - * In both cases the source port is set according the current port - * in use. - */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && - !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, - match_value, NULL, attr)) - return -rte_errno; - } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { - flow_dv_translate_item_integrity_post(match_mask, match_value, - integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else - MLX5_ASSERT(false); + if (mhdr_res->actions_num && + modify_action_position == UINT32_MAX) + modify_action_position = actions_n++; } -#ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher.mask.buf, - dev_flow->dv.value.buf)); -#endif - /* - * Layers may be already initialized from prefix flow if this dev_flow - * is the suffix flow. - */ - handle->layers |= item_flags; + dev_flow->act_flags = action_flags; + ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + error); + if (ret) + return -rte_errno; if (action_flags & MLX5_FLOW_ACTION_RSS) flow_dv_hashfields_set(dev_flow->handle->layers, rss_desc, @@ -14211,7 +14255,6 @@ flow_dv_translate(struct rte_eth_dev *dev, actions_n = tmp_actions_n; } dev_flow->dv.actions_n = actions_n; - dev_flow->act_flags = action_flags; if (wks->skip_matcher_reg) return 0; /* Register matcher. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
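A detail of the action loop in the diff above that is easy to miss: the individual SET_* header-rewrite actions are not each turned into a device action. They are accumulated as modify-header commands in mhdr_res, one slot in the action array is reserved the first time such a command appears (modify_action_position), and only when RTE_FLOW_ACTION_TYPE_END is reached is the composite modify-header resource registered and written into that reserved slot. A minimal standalone sketch of that pattern, using entirely hypothetical names rather than the PMD's API, could look like this:

/*
 * Sketch only (hypothetical names, not the mlx5 code): several "set field"
 * actions are folded into one modify-header object.  A slot in the action
 * array is reserved when the first one is seen, and the object is built
 * and placed into that slot only when the END action is reached.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_ACTIONS 8

enum act_type { ACT_SET_SRC, ACT_SET_DST, ACT_DROP, ACT_END };

struct mod_hdr {
        int num_cmds;                     /* accumulated modify-header commands */
};

static int
translate_actions(const enum act_type *acts, const void **out, int *out_n)
{
        struct mod_hdr mh = { .num_cmds = 0 };
        static int mod_obj;               /* stand-in for the HW object */
        uint32_t mod_pos = UINT32_MAX;    /* reserved slot, if any */
        int n = 0;

        for (; *acts != ACT_END; acts++) {
                switch (*acts) {
                case ACT_SET_SRC:
                case ACT_SET_DST:
                        mh.num_cmds++;    /* only record the command for now */
                        break;
                case ACT_DROP:
                        out[n++] = "drop";
                        break;
                default:
                        break;
                }
                /* Reserve one slot for the whole modify-header object. */
                if (mh.num_cmds && mod_pos == UINT32_MAX)
                        mod_pos = n++;
        }
        /* END: create the composite object once and fill the reserved slot. */
        if (mh.num_cmds)
                out[mod_pos] = &mod_obj;
        *out_n = n;
        return 0;
}

int
main(void)
{
        enum act_type acts[] = { ACT_SET_SRC, ACT_SET_DST, ACT_DROP, ACT_END };
        const void *out[MAX_ACTIONS];
        int n;

        translate_actions(acts, out, &n);
        printf("%d HW actions, one of them a single modify header\n", n);
        return 0;
}

The reserve-then-fill trick keeps the relative ordering of actions stable even though the modify-header object only comes into existence at the end of the loop.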
* RE: [v6 01/18] net/mlx5: split flow item translation 2022-10-20 15:57 ` [v6 01/18] net/mlx5: split flow item translation Alex Vesker @ 2022-10-24 6:47 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:47 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 01/18] net/mlx5: split flow item translation > > From: Suanming Mou <suanmingm@nvidia.com> > > In order to share the item translation code with hardware steering > mode, this commit splits flow item translation code to a dedicated > function. > > Signed-off-by: Suanming Mou <suanmingm@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
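The quoted commit message captures the point of this preparatory patch: item translation becomes a function of its own precisely so that the upcoming hardware steering path can call it as well, instead of duplicating the per-item logic. A toy illustration of that shape, using made-up names only (this is not the PMD's code), might be:

/*
 * Hypothetical sketch of the idea in the patch above: item-to-match-key
 * translation pulled into one helper so that a second (HW steering)
 * caller can reuse it.  None of these names belong to the PMD.
 */
#include <stdio.h>

enum item_type { ITEM_ETH, ITEM_IPV4, ITEM_UDP, ITEM_END };

struct item {
        enum item_type type;
        unsigned int spec;
        unsigned int mask;
};

struct match_key {
        unsigned int field[8];
};

/* Shared by both the legacy and the HW steering translate paths. */
static void
translate_items(const struct item *items, struct match_key *key)
{
        for (; items->type != ITEM_END; items++)
                key->field[items->type] = items->spec & items->mask;
}

/* Legacy path: actions handled here, items delegated to the helper. */
static void
sw_translate(const struct item *items, struct match_key *key)
{
        /* ... action translation would run here ... */
        translate_items(items, key);
}

/* New path: reuses the same item helper with its own surroundings. */
static void
hw_translate(const struct item *items, struct match_key *key)
{
        translate_items(items, key);
        /* ... queue the rule to the HW steering engine ... */
}

int
main(void)
{
        struct item pattern[] = {
                { ITEM_ETH, 0x8100, 0xffff },
                { ITEM_IPV4, 0x0a000001, 0xffffffff },
                { ITEM_END, 0, 0 },
        };
        struct match_key sw = {{0}}, hw = {{0}};

        sw_translate(pattern, &sw);
        hw_translate(pattern, &hw);
        printf("same key both ways: %s\n",
               sw.field[ITEM_IPV4] == hw.field[ITEM_IPV4] ? "yes" : "no");
        return 0;
}

In the actual series the extracted helper surfaces as flow_dv_translate_items(), which the reworked flow_dv_translate() calls once after its action loop, as seen in the diff earlier in the thread.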
* [v6 02/18] net/mlx5: split flow item matcher and value translation 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-20 15:57 ` [v6 01/18] net/mlx5: split flow item translation Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:49 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 03/18] net/mlx5: add hardware steering item translation function Alex Vesker ` (16 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering mode translates flow matcher and value in two different stages, split the flow item matcher and value translation to help reuse the code. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.h | 32 + drivers/net/mlx5/mlx5_flow_dv.c | 2314 +++++++++++++++---------------- 2 files changed, 1185 insertions(+), 1161 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 8e97fa188a..7e5ade52cb 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1267,6 +1267,38 @@ struct mlx5_flow_workspace { uint32_t skip_matcher_reg:1; /* Indicates if need to skip matcher register in translate. */ uint32_t mark:1; /* Indicates if flow contains mark action. */ + uint32_t vport_meta_tag; /* Used for vport index match. */ +}; + +/* Matcher translate type. */ +enum MLX5_SET_MATCHER { + MLX5_SET_MATCHER_SW_V = 1 << 0, + MLX5_SET_MATCHER_SW_M = 1 << 1, + MLX5_SET_MATCHER_HS_V = 1 << 2, + MLX5_SET_MATCHER_HS_M = 1 << 3, +}; + +#define MLX5_SET_MATCHER_SW (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_SW_M) +#define MLX5_SET_MATCHER_HS (MLX5_SET_MATCHER_HS_V | MLX5_SET_MATCHER_HS_M) +#define MLX5_SET_MATCHER_V (MLX5_SET_MATCHER_SW_V | MLX5_SET_MATCHER_HS_V) +#define MLX5_SET_MATCHER_M (MLX5_SET_MATCHER_SW_M | MLX5_SET_MATCHER_HS_M) + +/* Flow matcher workspace intermediate data. */ +struct mlx5_dv_matcher_workspace { + uint8_t priority; /* Flow priority. */ + uint64_t last_item; /* Last item in pattern. */ + uint64_t item_flags; /* Flow item pattern flags. */ + uint64_t action_flags; /* Flow action flags. */ + bool external; /* External flow or not. */ + uint32_t vlan_tag:12; /* Flow item VLAN tag. */ + uint8_t next_protocol; /* Tunnel next protocol */ + uint32_t geneve_tlv_option; /* Flow item Geneve TLV option. */ + uint32_t group; /* Flow group. */ + uint16_t udp_dport; /* Flow item UDP port. */ + const struct rte_flow_attr *attr; /* Flow attribute. */ + struct mlx5_flow_rss_desc *rss_desc; /* RSS descriptor. */ + const struct rte_flow_item *tunnel_item; /* Flow tunnel item. */ + const struct rte_flow_item *gre_item; /* Flow GRE item. 
*/ }; struct mlx5_flow_split_info { diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 0f3ff4db51..944db9c3e4 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -63,6 +63,25 @@ #define MLX5DV_FLOW_VLAN_PCP_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_PCP_MASK) #define MLX5DV_FLOW_VLAN_VID_MASK_BE RTE_BE16(MLX5DV_FLOW_VLAN_VID_MASK) +#define MLX5_ITEM_VALID(item, key_type) \ + (((MLX5_SET_MATCHER_SW & (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_V == (key_type)) && !((item)->spec)) || \ + ((MLX5_SET_MATCHER_HS_M == (key_type)) && !((item)->mask))) + +#define MLX5_ITEM_UPDATE(item, key_type, v, m, gm) \ + do { \ + if ((key_type) == MLX5_SET_MATCHER_SW_V) { \ + v = (item)->spec; \ + m = (item)->mask ? (item)->mask : (gm); \ + } else if ((key_type) == MLX5_SET_MATCHER_HS_V) { \ + v = (item)->spec; \ + m = (v); \ + } else { \ + v = (item)->mask ? (item)->mask : (gm); \ + m = (v); \ + } \ + } while (0) + union flow_dv_attr { struct { uint32_t valid:1; @@ -8325,70 +8344,61 @@ flow_dv_check_valid_spec(void *match_mask, void *match_value) static inline void flow_dv_set_match_ip_version(uint32_t group, void *headers_v, - void *headers_m, + uint32_t key_type, uint8_t ip_version) { - if (group == 0) - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, 0xf); + if (group == 0 && (key_type & MLX5_SET_MATCHER_M)) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, 0xf); else - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_version, + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_version, ip_version); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ethertype, 0); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ethertype, 0); } /** - * Add Ethernet item to matcher and to the value. + * Add Ethernet item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] grpup + * Flow matcher group. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_eth(void *matcher, void *key, - const struct rte_flow_item *item, int inner, - uint32_t group) +flow_dv_translate_item_eth(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_eth *eth_m = item->mask; - const struct rte_flow_item_eth *eth_v = item->spec; + const struct rte_flow_item_eth *eth_vv = item->spec; + const struct rte_flow_item_eth *eth_m; + const struct rte_flow_item_eth *eth_v; const struct rte_flow_item_eth nic_mask = { .dst.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src.addr_bytes = "\xff\xff\xff\xff\xff\xff", .type = RTE_BE16(0xffff), .has_vlan = 0, }; - void *hdrs_m; void *hdrs_v; char *l24_v; unsigned int i; - if (!eth_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!eth_m) - eth_m = &nic_mask; - if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); + MLX5_ITEM_UPDATE(item, key_type, eth_v, eth_m, &nic_mask); + if (!eth_vv) + eth_vv = eth_v; + if (inner) hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); + else hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, dmac_47_16), - ð_m->dst, sizeof(eth_m->dst)); /* The value must be in the range of the mask. 
*/ l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, dmac_47_16); for (i = 0; i < sizeof(eth_m->dst); ++i) l24_v[i] = eth_m->dst.addr_bytes[i] & eth_v->dst.addr_bytes[i]; - memcpy(MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, smac_47_16), - ð_m->src, sizeof(eth_m->src)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, smac_47_16); /* The value must be in the range of the mask. */ for (i = 0; i < sizeof(eth_m->dst); ++i) @@ -8402,145 +8412,149 @@ flow_dv_translate_item_eth(void *matcher, void *key, * eCPRI over Ether layer will use type value 0xAEFE. */ if (eth_m->type == 0xFFFF) { + rte_be16_t type = eth_v->type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) { + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + type = eth_vv->type; + } /* Set cvlan_tag mask for any single\multi\un-tagged case. */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - switch (eth_v->type) { + switch (type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_QINQ): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version(group, hdrs_v, key_type, + 6); return; default: break; } } - if (eth_m->has_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); - if (eth_v->has_vlan) { - /* - * Here, when also has_more_vlan field in VLAN item is - * not set, only single-tagged packets will be matched. - */ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + /* + * Only SW steering value should refer to the mask value. + * Other cases are using the fake masks, just ignore the mask. + */ + if (eth_v->has_vlan && eth_m->has_vlan) { + /* + * Here, when also has_more_vlan field in VLAN item is + * not set, only single-tagged packets will be matched. + */ + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); + if (key_type != MLX5_SET_MATCHER_HS_M && eth_vv->has_vlan) return; - } } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(eth_m->type)); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); *(uint16_t *)(l24_v) = eth_m->type & eth_v->type; } /** - * Add VLAN item to matcher and to the value. + * Add VLAN item to the value. * - * @param[in, out] dev_flow - * Flow descriptor. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Item workspace. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_vlan(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vlan *vlan_m = item->mask; - const struct rte_flow_item_vlan *vlan_v = item->spec; - void *hdrs_m; + const struct rte_flow_item_vlan *vlan_m; + const struct rte_flow_item_vlan *vlan_v; + const struct rte_flow_item_vlan *vlan_vv = item->spec; void *hdrs_v; - uint16_t tci_m; uint16_t tci_v; if (inner) { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); } else { - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* * This is workaround, masks are not supported, * and pre-validated. */ - if (vlan_v) - dev_flow->handle->vf_vlan.tag = - rte_be_to_cpu_16(vlan_v->tci) & 0x0fff; + if (vlan_vv) + wks->vlan_tag = rte_be_to_cpu_16(vlan_vv->tci) & 0x0fff; } /* * When VLAN item exists in flow, mark packet as tagged, * even if TCI is not specified. */ - if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, cvlan_tag, 1); + if (!MLX5_GET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag)) MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 1); - } - if (!vlan_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!vlan_m) - vlan_m = &rte_flow_item_vlan_mask; - tci_m = rte_be_to_cpu_16(vlan_m->tci); + MLX5_ITEM_UPDATE(item, key_type, vlan_v, vlan_m, + &rte_flow_item_vlan_mask); tci_v = rte_be_to_cpu_16(vlan_m->tci & vlan_v->tci); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_vid, tci_m); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_vid, tci_v); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_cfi, tci_m >> 12); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_cfi, tci_v >> 12); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, first_prio, tci_m >> 13); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, first_prio, tci_v >> 13); /* * HW is optimized for IPv4/IPv6. In such cases, avoid setting * ethertype, and use ip_version field instead. */ if (vlan_m->inner_type == 0xFFFF) { - switch (vlan_v->inner_type) { + rte_be16_t inner_type = vlan_v->inner_type; + + /* + * When set the matcher mask, refer to the original spec + * value. + */ + if (key_type == MLX5_SET_MATCHER_SW_M) + inner_type = vlan_vv->inner_type; + switch (inner_type) { case RTE_BE16(RTE_ETHER_TYPE_VLAN): - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, + cvlan_tag, 0); return; case RTE_BE16(RTE_ETHER_TYPE_IPV4): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 4); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 4); return; case RTE_BE16(RTE_ETHER_TYPE_IPV6): - flow_dv_set_match_ip_version(group, hdrs_v, hdrs_m, 6); + flow_dv_set_match_ip_version + (wks->group, hdrs_v, key_type, 6); return; default: break; } } if (vlan_m->has_more_vlan && vlan_v->has_more_vlan) { - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, svlan_tag, 1); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, svlan_tag, 1); /* Only one vlan_tag bit can be set. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); + if (key_type & MLX5_SET_MATCHER_V) + MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, cvlan_tag, 0); return; } - MLX5_SET(fte_match_set_lyr_2_4, hdrs_m, ethertype, - rte_be_to_cpu_16(vlan_m->inner_type)); MLX5_SET(fte_match_set_lyr_2_4, hdrs_v, ethertype, rte_be_to_cpu_16(vlan_m->inner_type & vlan_v->inner_type)); } /** - * Add IPV4 item to matcher and to the value. + * Add IPV4 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8549,14 +8563,15 @@ flow_dv_translate_item_vlan(struct mlx5_flow *dev_flow, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv4(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv4(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv4 *ipv4_m = item->mask; - const struct rte_flow_item_ipv4 *ipv4_v = item->spec; + const struct rte_flow_item_ipv4 *ipv4_m; + const struct rte_flow_item_ipv4 *ipv4_v; const struct rte_flow_item_ipv4 nic_mask = { .hdr = { .src_addr = RTE_BE32(0xffffffff), @@ -8566,68 +8581,41 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, .time_to_live = 0xff, }, }; - void *headers_m; void *headers_v; - char *l24_m; char *l24_v; - uint8_t tos, ihl_m, ihl_v; + uint8_t tos; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 4); - if (!ipv4_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 4); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv4_m) - ipv4_m = &nic_mask; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv4_layout.ipv4); + MLX5_ITEM_UPDATE(item, key_type, ipv4_v, ipv4_m, &nic_mask); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.dst_addr; *(uint32_t *)l24_v = ipv4_m->hdr.dst_addr & ipv4_v->hdr.dst_addr; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv4_layout.ipv4); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv4_layout.ipv4); - *(uint32_t *)l24_m = ipv4_m->hdr.src_addr; *(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr; tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service; - ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, - ipv4_m->hdr.type_of_service); + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, + ipv4_v->hdr.ihl & ipv4_m->hdr.ihl); + if (key_type == MLX5_SET_MATCHER_SW_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, + ipv4_v->hdr.type_of_service); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, - ipv4_m->hdr.type_of_service >> 2); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, tos >> 2); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv4_m->hdr.next_proto_id); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv4_v->hdr.next_proto_id & ipv4_m->hdr.next_proto_id); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv4_m->hdr.time_to_live); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv4_v->hdr.time_to_live & ipv4_m->hdr.time_to_live); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv4_m->hdr.fragment_offset)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv4_v->hdr.fragment_offset & ipv4_m->hdr.fragment_offset)); } /** - * Add IPV6 item to matcher and to the value. + * Add IPV6 item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8636,14 +8624,15 @@ flow_dv_translate_item_ipv4(void *matcher, void *key, * Item is inner pattern. * @param[in] group * The group to insert the rule. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_ipv6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner, uint32_t group) +flow_dv_translate_item_ipv6(void *key, const struct rte_flow_item *item, + int inner, uint32_t group, uint32_t key_type) { - const struct rte_flow_item_ipv6 *ipv6_m = item->mask; - const struct rte_flow_item_ipv6 *ipv6_v = item->spec; + const struct rte_flow_item_ipv6 *ipv6_m; + const struct rte_flow_item_ipv6 *ipv6_v; const struct rte_flow_item_ipv6 nic_mask = { .hdr = { .src_addr = @@ -8657,287 +8646,217 @@ flow_dv_translate_item_ipv6(void *matcher, void *key, .hop_limits = 0xff, }, }; - void *headers_m; void *headers_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - char *l24_m; char *l24_v; - uint32_t vtc_m; uint32_t vtc_v; int i; int size; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - flow_dv_set_match_ip_version(group, headers_v, headers_m, 6); - if (!ipv6_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + flow_dv_set_match_ip_version(group, headers_v, key_type, 6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_m) - ipv6_m = &nic_mask; + MLX5_ITEM_UPDATE(item, key_type, ipv6_v, ipv6_m, &nic_mask); size = sizeof(ipv6_m->hdr.dst_addr); - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - dst_ipv4_dst_ipv6.ipv6_layout.ipv6); l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, dst_ipv4_dst_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.dst_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.dst_addr[i]; - l24_m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_m, - src_ipv4_src_ipv6.ipv6_layout.ipv6); + l24_v[i] = ipv6_m->hdr.dst_addr[i] & ipv6_v->hdr.dst_addr[i]; l24_v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, headers_v, src_ipv4_src_ipv6.ipv6_layout.ipv6); - memcpy(l24_m, ipv6_m->hdr.src_addr, size); for (i = 0; i < size; ++i) - l24_v[i] = l24_m[i] & ipv6_v->hdr.src_addr[i]; + l24_v[i] = ipv6_m->hdr.src_addr[i] & ipv6_v->hdr.src_addr[i]; /* TOS. */ - vtc_m = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow); vtc_v = rte_be_to_cpu_32(ipv6_m->hdr.vtc_flow & ipv6_v->hdr.vtc_flow); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn, vtc_m >> 20); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, vtc_v >> 20); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_dscp, vtc_m >> 22); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_dscp, vtc_v >> 22); /* Label. */ - if (inner) { - MLX5_SET(fte_match_set_misc, misc_m, inner_ipv6_flow_label, - vtc_m); + if (inner) MLX5_SET(fte_match_set_misc, misc_v, inner_ipv6_flow_label, vtc_v); - } else { - MLX5_SET(fte_match_set_misc, misc_m, outer_ipv6_flow_label, - vtc_m); + else MLX5_SET(fte_match_set_misc, misc_v, outer_ipv6_flow_label, vtc_v); - } /* Protocol. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_m->hdr.proto); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_v->hdr.proto & ipv6_m->hdr.proto); /* Hop limit. 
*/ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ttl_hoplimit, - ipv6_m->hdr.hop_limits); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ttl_hoplimit, ipv6_v->hdr.hop_limits & ipv6_m->hdr.hop_limits); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, - !!(ipv6_m->has_frag_ext)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, !!(ipv6_v->has_frag_ext & ipv6_m->has_frag_ext)); } /** - * Add IPV6 fragment extension item to matcher and to the value. + * Add IPV6 fragment extension item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ipv6_frag_ext(void *matcher, void *key, +flow_dv_translate_item_ipv6_frag_ext(void *key, const struct rte_flow_item *item, - int inner) + int inner, uint32_t key_type) { - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m = item->mask; - const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v = item->spec; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_m; + const struct rte_flow_item_ipv6_frag_ext *ipv6_frag_ext_v; const struct rte_flow_item_ipv6_frag_ext nic_mask = { .hdr = { .next_header = 0xff, .frag_data = RTE_BE16(0xffff), }, }; - void *headers_m; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); /* IPv6 fragment extension item exists, so packet is IP fragment. */ - MLX5_SET(fte_match_set_lyr_2_4, headers_m, frag, 1); MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, 1); - if (!ipv6_frag_ext_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ipv6_frag_ext_m) - ipv6_frag_ext_m = &nic_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, - ipv6_frag_ext_m->hdr.next_header); + MLX5_ITEM_UPDATE(item, key_type, ipv6_frag_ext_v, + ipv6_frag_ext_m, &nic_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, ipv6_frag_ext_v->hdr.next_header & ipv6_frag_ext_m->hdr.next_header); } /** - * Add TCP item to matcher and to the value. + * Add TCP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
*/ static void -flow_dv_translate_item_tcp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_tcp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_tcp *tcp_m = item->mask; - const struct rte_flow_item_tcp *tcp_v = item->spec; - void *headers_m; + const struct rte_flow_item_tcp *tcp_m; + const struct rte_flow_item_tcp *tcp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_TCP); - if (!tcp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_TCP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!tcp_m) - tcp_m = &rte_flow_item_tcp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_sport, - rte_be_to_cpu_16(tcp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, tcp_v, tcp_m, + &rte_flow_item_tcp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_sport, rte_be_to_cpu_16(tcp_v->hdr.src_port & tcp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_dport, - rte_be_to_cpu_16(tcp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_dport, rte_be_to_cpu_16(tcp_v->hdr.dst_port & tcp_m->hdr.dst_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, tcp_flags, - tcp_m->hdr.tcp_flags); MLX5_SET(fte_match_set_lyr_2_4, headers_v, tcp_flags, - (tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags)); + tcp_v->hdr.tcp_flags & tcp_m->hdr.tcp_flags); } /** - * Add ESP item to matcher and to the value. + * Add ESP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_esp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_esp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_esp *esp_m = item->mask; - const struct rte_flow_item_esp *esp_v = item->spec; - void *headers_m; + const struct rte_flow_item_esp *esp_m; + const struct rte_flow_item_esp *esp_v; void *headers_v; - char *spi_m; char *spi_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ESP); - if (!esp_v) + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ESP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!esp_m) - esp_m = &rte_flow_item_esp_mask; - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + MLX5_ITEM_UPDATE(item, key_type, esp_v, esp_m, + &rte_flow_item_esp_mask); headers_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - if (inner) { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, inner_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, inner_esp_spi); - } else { - spi_m = MLX5_ADDR_OF(fte_match_set_misc, headers_m, outer_esp_spi); - spi_v = MLX5_ADDR_OF(fte_match_set_misc, headers_v, outer_esp_spi); - } - *(uint32_t *)spi_m = esp_m->hdr.spi; + spi_v = inner ? MLX5_ADDR_OF(fte_match_set_misc, headers_v, + inner_esp_spi) : MLX5_ADDR_OF(fte_match_set_misc + , headers_v, outer_esp_spi); *(uint32_t *)spi_v = esp_m->hdr.spi & esp_v->hdr.spi; } /** - * Add UDP item to matcher and to the value. + * Add UDP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_udp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_udp(void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_udp *udp_m = item->mask; - const struct rte_flow_item_udp *udp_v = item->spec; - void *headers_m; + const struct rte_flow_item_udp *udp_m; + const struct rte_flow_item_udp *udp_v; void *headers_v; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_UDP); - if (!udp_v) + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_UDP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!udp_m) - udp_m = &rte_flow_item_udp_mask; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_sport, - rte_be_to_cpu_16(udp_m->hdr.src_port)); + MLX5_ITEM_UPDATE(item, key_type, udp_v, udp_m, + &rte_flow_item_udp_mask); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_sport, rte_be_to_cpu_16(udp_v->hdr.src_port & udp_m->hdr.src_port)); - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - rte_be_to_cpu_16(udp_m->hdr.dst_port)); MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, rte_be_to_cpu_16(udp_v->hdr.dst_port & udp_m->hdr.dst_port)); + /* Force get UDP dport in case to be used in VXLAN translate. 
*/ + if (key_type & MLX5_SET_MATCHER_SW) { + udp_v = item->spec; + wks->udp_dport = rte_be_to_cpu_16(udp_v->hdr.dst_port & + udp_m->hdr.dst_port); + } } /** - * Add GRE optional Key item to matcher and to the value. + * Add GRE optional Key item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -8946,55 +8865,46 @@ flow_dv_translate_item_udp(void *matcher, void *key, * Item is inner pattern. */ static void -flow_dv_translate_item_gre_key(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gre_key(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const rte_be32_t *key_m = item->mask; - const rte_be32_t *key_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const rte_be32_t *key_m; + const rte_be32_t *key_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); rte_be32_t gre_key_default_mask = RTE_BE32(UINT32_MAX); /* GRE K bit must be on and should already be validated */ - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, 1); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, 1); - if (!key_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!key_m) - key_m = &gre_key_default_mask; - MLX5_SET(fte_match_set_misc, misc_m, gre_key_h, - rte_be_to_cpu_32(*key_m) >> 8); + MLX5_ITEM_UPDATE(item, key_type, key_v, key_m, + &gre_key_default_mask); MLX5_SET(fte_match_set_misc, misc_v, gre_key_h, rte_be_to_cpu_32((*key_v) & (*key_m)) >> 8); - MLX5_SET(fte_match_set_misc, misc_m, gre_key_l, - rte_be_to_cpu_32(*key_m) & 0xFF); MLX5_SET(fte_match_set_misc, misc_v, gre_key_l, rte_be_to_cpu_32((*key_v) & (*key_m)) & 0xFF); } /** - * Add GRE item to matcher and to the value. + * Add GRE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
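 *
 * Note: as in the UDP/TCP/ESP translators above, the implicit protocol
 * preset can no longer be written to matcher and key in one place, so the
 * constant is chosen by key_type: the mask pass writes 0xff into
 * ip_protocol, the value pass writes IPPROTO_GRE. A minimal sketch of that
 * recurring pattern (flow_dv_set_ip_protocol() is a hypothetical helper
 * name; the patch open-codes the if/else in every translator):
 *
 *   static void
 *   flow_dv_set_ip_protocol(void *headers_v, uint8_t proto, uint32_t key_type)
 *   {
 *           if (key_type & MLX5_SET_MATCHER_M)
 *                   MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 *                            ip_protocol, 0xff);
 *           else
 *                   MLX5_SET(fte_match_set_lyr_2_4, headers_v,
 *                            ip_protocol, proto);
 *   }
 *
 * The UDP translator additionally records the masked destination port in
 * wks->udp_dport on the SW value pass only, so that the VXLAN translator
 * can later decide whether MISC5 matching is usable on ConnectX-5 format
 * devices.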
*/ static void -flow_dv_translate_item_gre(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_gre empty_gre = {0,}; const struct rte_flow_item_gre *gre_m = item->mask; const struct rte_flow_item_gre *gre_v = item->spec; - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); struct { union { @@ -9012,8 +8922,11 @@ flow_dv_translate_item_gre(void *matcher, void *key, } gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_GRE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_GRE); if (!gre_v) { gre_v = &empty_gre; gre_m = &empty_gre; @@ -9021,20 +8934,18 @@ flow_dv_translate_item_gre(void *matcher, void *key, if (!gre_m) gre_m = &rte_flow_item_gre_mask; } + if (key_type & MLX5_SET_MATCHER_M) + gre_v = gre_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + gre_m = gre_v; gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver); gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver); - MLX5_SET(fte_match_set_misc, misc_m, gre_c_present, - gre_crks_rsvd0_ver_m.c_present); MLX5_SET(fte_match_set_misc, misc_v, gre_c_present, gre_crks_rsvd0_ver_v.c_present & gre_crks_rsvd0_ver_m.c_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_k_present, - gre_crks_rsvd0_ver_m.k_present); MLX5_SET(fte_match_set_misc, misc_v, gre_k_present, gre_crks_rsvd0_ver_v.k_present & gre_crks_rsvd0_ver_m.k_present); - MLX5_SET(fte_match_set_misc, misc_m, gre_s_present, - gre_crks_rsvd0_ver_m.s_present); MLX5_SET(fte_match_set_misc, misc_v, gre_s_present, gre_crks_rsvd0_ver_v.s_present & gre_crks_rsvd0_ver_m.s_present); @@ -9045,17 +8956,17 @@ flow_dv_translate_item_gre(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, protocol_m & protocol_v); } /** - * Add GRE optional items to matcher and to the value. + * Add GRE optional items to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -9064,13 +8975,16 @@ flow_dv_translate_item_gre(void *matcher, void *key, * Pointer to gre_item. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
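 *
 * Note: when the checksum or sequence sub-fields are requested, the
 * translator below matches through the misc_parameters_5 tunnel_header
 * words instead of the classic misc GRE fields. From the MLX5_SET() calls
 * in this function the layout is, roughly:
 *
 *   tunnel_header_0 : C/K/S + reserved/version bits | protocol << 16
 *   tunnel_header_1 : GRE checksum (the VXLAN translator reuses this word
 *                     for the VNI on misc5-capable devices)
 *   tunnel_header_2 : GRE key
 *   tunnel_header_3 : GRE sequence number
 *
 * If only the key field is present, the function falls back to the plain
 * GRE + GRE-key translation so that devices without misc5 support keep
 * working.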
*/ static void -flow_dv_translate_item_gre_option(void *matcher, void *key, +flow_dv_translate_item_gre_option(void *key, const struct rte_flow_item *item, const struct rte_flow_item *gre_item, - uint64_t pattern_flags) + uint64_t pattern_flags, uint32_t key_type) { + void *misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); const struct rte_flow_item_gre_opt *option_m = item->mask; const struct rte_flow_item_gre_opt *option_v = item->spec; const struct rte_flow_item_gre *gre_m = gre_item->mask; @@ -9079,8 +8993,6 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, struct rte_flow_item gre_key_item; uint16_t c_rsvd0_ver_m, c_rsvd0_ver_v; uint16_t protocol_m, protocol_v; - void *misc5_m; - void *misc5_v; /* * If only match key field, keep using misc for matching. @@ -9089,11 +9001,10 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, */ if (!(option_m->sequence.sequence || option_m->checksum_rsvd.checksum)) { - flow_dv_translate_item_gre(matcher, key, gre_item, - pattern_flags); + flow_dv_translate_item_gre(key, gre_item, pattern_flags, key_type); gre_key_item.spec = &option_v->key.key; gre_key_item.mask = &option_m->key.key; - flow_dv_translate_item_gre_key(matcher, key, &gre_key_item); + flow_dv_translate_item_gre_key(key, &gre_key_item, key_type); return; } if (!gre_v) { @@ -9128,57 +9039,49 @@ flow_dv_translate_item_gre_option(void *matcher, void *key, c_rsvd0_ver_v |= RTE_BE16(0x8000); c_rsvd0_ver_m |= RTE_BE16(0x8000); } + if (key_type & MLX5_SET_MATCHER_M) { + c_rsvd0_ver_v = c_rsvd0_ver_m; + protocol_v = protocol_m; + option_v = option_m; + } /* * Hardware parses GRE optional field into the fixed location, * do not need to adjust the tunnel dword indices. */ - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_0, rte_be_to_cpu_32((c_rsvd0_ver_v | protocol_v << 16) & (c_rsvd0_ver_m | protocol_m << 16))); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_0, - rte_be_to_cpu_32(c_rsvd0_ver_m | protocol_m << 16)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_1, rte_be_to_cpu_32(option_v->checksum_rsvd.checksum & option_m->checksum_rsvd.checksum)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_1, - rte_be_to_cpu_32(option_m->checksum_rsvd.checksum)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_2, rte_be_to_cpu_32(option_v->key.key & option_m->key.key)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_2, - rte_be_to_cpu_32(option_m->key.key)); MLX5_SET(fte_match_set_misc5, misc5_v, tunnel_header_3, rte_be_to_cpu_32(option_v->sequence.sequence & option_m->sequence.sequence)); - MLX5_SET(fte_match_set_misc5, misc5_m, tunnel_header_3, - rte_be_to_cpu_32(option_m->sequence.sequence)); } /** * Add NVGRE item to matcher and to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] pattern_flags * Accumulated pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. 
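 *
 * Note: NVGRE is translated below as a thin wrapper over the GRE path: a
 * locally built gre_item (spec/mask constructed just above the hunk) is
 * handed to flow_dv_translate_item_gre() first, and then the 24-bit TNI
 * plus the 8-bit flow_id are copied, pre-masked byte by byte, into the
 * gre_key area of misc_parameters:
 *
 *   size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id);
 *   for (i = 0; i < size; ++i)
 *           gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i];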
*/ static void -flow_dv_translate_item_nvgre(void *matcher, void *key, - const struct rte_flow_item *item, - unsigned long pattern_flags) +flow_dv_translate_item_nvgre(void *key, const struct rte_flow_item *item, + unsigned long pattern_flags, uint32_t key_type) { - const struct rte_flow_item_nvgre *nvgre_m = item->mask; - const struct rte_flow_item_nvgre *nvgre_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_nvgre *nvgre_m; + const struct rte_flow_item_nvgre *nvgre_v; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); const char *tni_flow_id_m; const char *tni_flow_id_v; - char *gre_key_m; char *gre_key_v; int size; int i; @@ -9197,158 +9100,145 @@ flow_dv_translate_item_nvgre(void *matcher, void *key, .mask = &gre_mask, .last = NULL, }; - flow_dv_translate_item_gre(matcher, key, &gre_item, pattern_flags); - if (!nvgre_v) + flow_dv_translate_item_gre(key, &gre_item, pattern_flags, key_type); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!nvgre_m) - nvgre_m = &rte_flow_item_nvgre_mask; + MLX5_ITEM_UPDATE(item, key_type, nvgre_v, nvgre_m, + &rte_flow_item_nvgre_mask); tni_flow_id_m = (const char *)nvgre_m->tni; tni_flow_id_v = (const char *)nvgre_v->tni; size = sizeof(nvgre_m->tni) + sizeof(nvgre_m->flow_id); - gre_key_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, gre_key_h); gre_key_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, gre_key_h); - memcpy(gre_key_m, tni_flow_id_m, size); for (i = 0; i < size; ++i) - gre_key_v[i] = gre_key_m[i] & tni_flow_id_v[i]; + gre_key_v[i] = tni_flow_id_m[i] & tni_flow_id_v[i]; } /** - * Add VXLAN item to matcher and to the value. + * Add VXLAN item to the value. * * @param[in] dev * Pointer to the Ethernet device structure. * @param[in] attr * Flow rule attributes. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] wks + * Matcher workspace. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_vxlan(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, - void *matcher, void *key, - const struct rte_flow_item *item, - int inner) + void *key, const struct rte_flow_item *item, + int inner, struct mlx5_dv_matcher_workspace *wks, + uint32_t key_type) { - const struct rte_flow_item_vxlan *vxlan_m = item->mask; - const struct rte_flow_item_vxlan *vxlan_v = item->spec; - void *headers_m; + const struct rte_flow_item_vxlan *vxlan_m; + const struct rte_flow_item_vxlan *vxlan_v; + const struct rte_flow_item_vxlan *vxlan_vv = item->spec; void *headers_v; - void *misc5_m; + void *misc_v; void *misc5_v; + uint32_t tunnel_v; uint32_t *tunnel_header_v; - uint32_t *tunnel_header_m; + char *vni_v; uint16_t dport; + int size; + int i; struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_vxlan nic_mask = { .vni = "\xff\xff\xff", .rsvd1 = 0xff, }; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); dport = item->type == RTE_FLOW_ITEM_TYPE_VXLAN ? MLX5_UDP_PORT_VXLAN : MLX5_UDP_PORT_VXLAN_GPE; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); - } - dport = MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport); - if (!vxlan_v) - return; - if (!vxlan_m) { - if ((!attr->group && !priv->sh->tunnel_header_0_1) || - (attr->group && !priv->sh->misc5_cap)) - vxlan_m = &rte_flow_item_vxlan_mask; + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); else - vxlan_m = &nic_mask; + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } + /* + * Read the UDP dport to check if the value satisfies the VXLAN + * matching with MISC5 for CX5. + */ + if (wks->udp_dport) + dport = wks->udp_dport; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, vxlan_v, vxlan_m, &nic_mask); + if (item->mask == &nic_mask && + ((!attr->group && !priv->sh->tunnel_header_0_1) || + (attr->group && !priv->sh->misc5_cap))) + vxlan_m = &rte_flow_item_vxlan_mask; if ((priv->sh->steering_format_version == - MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && - dport != MLX5_UDP_PORT_VXLAN) || - (!attr->group && !attr->transfer && !priv->sh->tunnel_header_0_1) || + MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 && + dport != MLX5_UDP_PORT_VXLAN) || + (!attr->group && !attr->transfer) || ((attr->group || attr->transfer) && !priv->sh->misc5_cap)) { - void *misc_m; - void *misc_v; - char *vni_m; - char *vni_v; - int size; - int i; - misc_m = MLX5_ADDR_OF(fte_match_param, - matcher, misc_parameters); misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); size = sizeof(vxlan_m->vni); - vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, vxlan_vni); vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, vxlan_vni); - memcpy(vni_m, vxlan_m->vni, size); for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; return; } - misc5_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_5); - misc5_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_5); tunnel_header_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, misc5_v, tunnel_header_1); - tunnel_header_m = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc5, - misc5_m, - tunnel_header_1); - *tunnel_header_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | - (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | - (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; - if (*tunnel_header_v) - *tunnel_header_m = vxlan_m->vni[0] | - vxlan_m->vni[1] << 8 | - vxlan_m->vni[2] << 16; - else - *tunnel_header_m = 0x0; - *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; - if (vxlan_v->rsvd1 & vxlan_m->rsvd1) - *tunnel_header_m |= vxlan_m->rsvd1 << 24; + tunnel_v = (vxlan_v->vni[0] & vxlan_m->vni[0]) | + (vxlan_v->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_v->vni[2] & vxlan_m->vni[2]) << 16; + *tunnel_header_v = tunnel_v; + if (key_type == MLX5_SET_MATCHER_SW_M) { + tunnel_v = (vxlan_vv->vni[0] & vxlan_m->vni[0]) | + (vxlan_vv->vni[1] & vxlan_m->vni[1]) << 8 | + (vxlan_vv->vni[2] & vxlan_m->vni[2]) << 16; + if (!tunnel_v) + *tunnel_header_v = 0x0; + if (vxlan_vv->rsvd1 & vxlan_m->rsvd1) + *tunnel_header_v |= vxlan_v->rsvd1 << 24; + } else { + *tunnel_header_v |= (vxlan_v->rsvd1 & vxlan_m->rsvd1) << 24; + } } /** - * Add VXLAN-GPE item to matcher 
and to the value. + * Add VXLAN-GPE item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, - const struct rte_flow_item *item, - const uint64_t pattern_flags) +flow_dv_translate_item_vxlan_gpe(void *key, const struct rte_flow_item *item, + const uint64_t pattern_flags, + uint32_t key_type) { static const struct rte_flow_item_vxlan_gpe dummy_vxlan_gpe_hdr = {0, }; const struct rte_flow_item_vxlan_gpe *vxlan_m = item->mask; const struct rte_flow_item_vxlan_gpe *vxlan_v = item->spec; /* The item was validated to be on the outer side */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_3); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - char *vni_m = - MLX5_ADDR_OF(fte_match_set_misc3, misc_m, outer_vxlan_gpe_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc3, misc_v, outer_vxlan_gpe_vni); int i, size = sizeof(vxlan_m->vni); @@ -9357,9 +9247,12 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, uint8_t m_protocol, v_protocol; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_VXLAN_GPE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_VXLAN_GPE); } if (!vxlan_v) { vxlan_v = &dummy_vxlan_gpe_hdr; @@ -9368,15 +9261,18 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, if (!vxlan_m) vxlan_m = &rte_flow_item_vxlan_gpe_mask; } - memcpy(vni_m, vxlan_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + vxlan_v = vxlan_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + vxlan_m = vxlan_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & vxlan_v->vni[i]; + vni_v[i] = vxlan_m->vni[i] & vxlan_v->vni[i]; if (vxlan_m->flags) { flags_m = vxlan_m->flags; flags_v = vxlan_v->flags; } - MLX5_SET(fte_match_set_misc3, misc_m, outer_vxlan_gpe_flags, flags_m); - MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, flags_v); + MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_flags, + flags_m & flags_v); m_protocol = vxlan_m->protocol; v_protocol = vxlan_v->protocol; if (!m_protocol) { @@ -9389,50 +9285,50 @@ flow_dv_translate_item_vxlan_gpe(void *matcher, void *key, v_protocol = RTE_VXLAN_GPE_TYPE_IPV6; if (v_protocol) m_protocol = 0xFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + v_protocol = m_protocol; } - MLX5_SET(fte_match_set_misc3, misc_m, - outer_vxlan_gpe_next_protocol, m_protocol); MLX5_SET(fte_match_set_misc3, misc_v, outer_vxlan_gpe_next_protocol, m_protocol & v_protocol); } /** - * Add Geneve item to matcher and to the value. + * Add Geneve item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] pattern_flags + * Item pattern flags. 
+ * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_geneve(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t pattern_flags) +flow_dv_translate_item_geneve(void *key, const struct rte_flow_item *item, + uint64_t pattern_flags, uint32_t key_type) { static const struct rte_flow_item_geneve empty_geneve = {0,}; const struct rte_flow_item_geneve *geneve_m = item->mask; const struct rte_flow_item_geneve *geneve_v = item->spec; /* GENEVE flow item validation allows single tunnel item */ - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); uint16_t gbhdr_m; uint16_t gbhdr_v; - char *vni_m = MLX5_ADDR_OF(fte_match_set_misc, misc_m, geneve_vni); char *vni_v = MLX5_ADDR_OF(fte_match_set_misc, misc_v, geneve_vni); size_t size = sizeof(geneve_m->vni), i; uint16_t protocol_m, protocol_v; if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_GENEVE); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, + MLX5_UDP_PORT_GENEVE); } if (!geneve_v) { geneve_v = &empty_geneve; @@ -9441,17 +9337,16 @@ flow_dv_translate_item_geneve(void *matcher, void *key, if (!geneve_m) geneve_m = &rte_flow_item_geneve_mask; } - memcpy(vni_m, geneve_m->vni, size); + if (key_type & MLX5_SET_MATCHER_M) + geneve_v = geneve_m; + else if (key_type == MLX5_SET_MATCHER_HS_V) + geneve_m = geneve_v; for (i = 0; i < size; ++i) - vni_v[i] = vni_m[i] & geneve_v->vni[i]; + vni_v[i] = geneve_m->vni[i] & geneve_v->vni[i]; gbhdr_m = rte_be_to_cpu_16(geneve_m->ver_opt_len_o_c_rsvd0); gbhdr_v = rte_be_to_cpu_16(geneve_v->ver_opt_len_o_c_rsvd0); - MLX5_SET(fte_match_set_misc, misc_m, geneve_oam, - MLX5_GENEVE_OAMF_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_oam, MLX5_GENEVE_OAMF_VAL(gbhdr_v) & MLX5_GENEVE_OAMF_VAL(gbhdr_m)); - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, MLX5_GENEVE_OPTLEN_VAL(gbhdr_v) & MLX5_GENEVE_OPTLEN_VAL(gbhdr_m)); @@ -9462,8 +9357,10 @@ flow_dv_translate_item_geneve(void *matcher, void *key, protocol_v = mlx5_translate_tunnel_etypes(pattern_flags); if (protocol_v) protocol_m = 0xFFFF; + /* Restore the value to mask in mask case. */ + if (key_type & MLX5_SET_MATCHER_M) + protocol_v = protocol_m; } - MLX5_SET(fte_match_set_misc, misc_m, geneve_protocol_type, protocol_m); MLX5_SET(fte_match_set_misc, misc_v, geneve_protocol_type, protocol_m & protocol_v); } @@ -9473,10 +9370,8 @@ flow_dv_translate_item_geneve(void *matcher, void *key, * * @param dev[in, out] * Pointer to rte_eth_dev structure. - * @param[in, out] tag_be24 - * Tag value in big endian then R-shift 8. - * @parm[in, out] dev_flow - * Pointer to the dev_flow. + * @param[in] item + * Flow pattern to translate. * @param[out] error * pointer to error structure. * @@ -9553,38 +9448,38 @@ flow_dev_geneve_tlv_option_resource_register(struct rte_eth_dev *dev, } /** - * Add Geneve TLV option item to matcher. + * Add Geneve TLV option item to value. * * @param[in, out] dev * Pointer to rte_eth_dev structure. 
- * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. * @param[out] error * Pointer to error structure. */ static int -flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, +flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type, struct rte_flow_error *error) { - const struct rte_flow_item_geneve_opt *geneve_opt_m = item->mask; - const struct rte_flow_item_geneve_opt *geneve_opt_v = item->spec; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); + const struct rte_flow_item_geneve_opt *geneve_opt_m; + const struct rte_flow_item_geneve_opt *geneve_opt_v; + const struct rte_flow_item_geneve_opt *geneve_opt_vv = item->spec; void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); rte_be32_t opt_data_key = 0, opt_data_mask = 0; + uint32_t *data; int ret = 0; - if (!geneve_opt_v) + if (MLX5_ITEM_VALID(item, key_type)) return -1; - if (!geneve_opt_m) - geneve_opt_m = &rte_flow_item_geneve_opt_mask; + MLX5_ITEM_UPDATE(item, key_type, geneve_opt_v, geneve_opt_m, + &rte_flow_item_geneve_opt_mask); ret = flow_dev_geneve_tlv_option_resource_register(dev, item, error); if (ret) { @@ -9598,17 +9493,21 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * If the option length was not requested but the GENEVE TLV option item * is present we set the option length field implicitly. */ - if (!MLX5_GET16(fte_match_set_misc, misc_m, geneve_opt_len)) { - MLX5_SET(fte_match_set_misc, misc_m, geneve_opt_len, - MLX5_GENEVE_OPTLEN_MASK); - MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, - geneve_opt_v->option_len + 1); - } - MLX5_SET(fte_match_set_misc, misc_m, geneve_tlv_option_0_exist, 1); - MLX5_SET(fte_match_set_misc, misc_v, geneve_tlv_option_0_exist, 1); + if (!MLX5_GET16(fte_match_set_misc, misc_v, geneve_opt_len)) { + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + MLX5_GENEVE_OPTLEN_MASK); + else + MLX5_SET(fte_match_set_misc, misc_v, geneve_opt_len, + geneve_opt_v->option_len + 1); + } /* Set the data. */ - if (geneve_opt_v->data) { - memcpy(&opt_data_key, geneve_opt_v->data, + if (key_type == MLX5_SET_MATCHER_SW_V) + data = geneve_opt_vv->data; + else + data = geneve_opt_v->data; + if (data) { + memcpy(&opt_data_key, data, RTE_MIN((uint32_t)(geneve_opt_v->option_len * 4), sizeof(opt_data_key))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= @@ -9618,9 +9517,6 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, sizeof(opt_data_mask))); MLX5_ASSERT((uint32_t)(geneve_opt_v->option_len * 4) <= sizeof(opt_data_mask)); - MLX5_SET(fte_match_set_misc3, misc3_m, - geneve_tlv_option_0_data, - rte_be_to_cpu_32(opt_data_mask)); MLX5_SET(fte_match_set_misc3, misc3_v, geneve_tlv_option_0_data, rte_be_to_cpu_32(opt_data_key & opt_data_mask)); @@ -9629,10 +9525,8 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, } /** - * Add MPLS item to matcher and to the value. + * Add MPLS item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] item @@ -9641,93 +9535,78 @@ flow_dv_translate_item_geneve_opt(struct rte_eth_dev *dev, void *matcher, * The protocol layer indicated in previous item. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_mpls(void *matcher, void *key, - const struct rte_flow_item *item, - uint64_t prev_layer, - int inner) +flow_dv_translate_item_mpls(void *key, const struct rte_flow_item *item, + uint64_t prev_layer, int inner, + uint32_t key_type) { - const uint32_t *in_mpls_m = item->mask; - const uint32_t *in_mpls_v = item->spec; - uint32_t *out_mpls_m = 0; + const uint32_t *in_mpls_m; + const uint32_t *in_mpls_v; uint32_t *out_mpls_v = 0; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - void *misc2_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); - void *headers_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); void *headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, - 0xffff); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, - MLX5_UDP_PORT_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xffff); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, MLX5_UDP_PORT_MPLS); } break; case MLX5_FLOW_LAYER_GRE: /* Fall-through. */ case MLX5_FLOW_LAYER_GRE_KEY: if (!MLX5_GET16(fte_match_set_misc, misc_v, gre_protocol)) { - MLX5_SET(fte_match_set_misc, misc_m, gre_protocol, - 0xffff); - MLX5_SET(fte_match_set_misc, misc_v, gre_protocol, - RTE_ETHER_TYPE_MPLS); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, 0xffff); + else + MLX5_SET(fte_match_set_misc, misc_v, + gre_protocol, RTE_ETHER_TYPE_MPLS); } break; default: break; } - if (!in_mpls_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!in_mpls_m) - in_mpls_m = (const uint32_t *)&rte_flow_item_mpls_mask; + MLX5_ITEM_UPDATE(item, key_type, in_mpls_v, in_mpls_m, + &rte_flow_item_mpls_mask); switch (prev_layer) { case MLX5_FLOW_LAYER_OUTER_L4_UDP: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_udp); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_udp); break; case MLX5_FLOW_LAYER_GRE: - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_m, - outer_first_mpls_over_gre); out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls_over_gre); break; default: /* Inner MPLS not over GRE is not supported. */ - if (!inner) { - out_mpls_m = - (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, - misc2_m, - outer_first_mpls); + if (!inner) out_mpls_v = (uint32_t *)MLX5_ADDR_OF(fte_match_set_misc2, misc2_v, outer_first_mpls); - } break; } - if (out_mpls_m && out_mpls_v) { - *out_mpls_m = *in_mpls_m; + if (out_mpls_v) *out_mpls_v = *in_mpls_v & *in_mpls_m; - } } /** * Add metadata register item to matcher * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] reg_type @@ -9738,12 +9617,9 @@ flow_dv_translate_item_mpls(void *matcher, void *key, * Register mask */ static void -flow_dv_match_meta_reg(void *matcher, void *key, - enum modify_reg reg_type, +flow_dv_match_meta_reg(void *key, enum modify_reg reg_type, uint32_t data, uint32_t mask) { - void *misc2_m = - MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters_2); void *misc2_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_2); uint32_t temp; @@ -9751,11 +9627,9 @@ flow_dv_match_meta_reg(void *matcher, void *key, data &= mask; switch (reg_type) { case REG_A: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_a, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_a, data); break; case REG_B: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_b, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_b, data); break; case REG_C_0: @@ -9764,40 +9638,31 @@ flow_dv_match_meta_reg(void *matcher, void *key, * source vport index and META item value, we should set * this field according to specified mask, not as whole one. */ - temp = MLX5_GET(fte_match_set_misc2, misc2_m, metadata_reg_c_0); - temp |= mask; - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_0, temp); temp = MLX5_GET(fte_match_set_misc2, misc2_v, metadata_reg_c_0); - temp &= ~mask; + if (mask) + temp &= ~mask; temp |= data; MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_0, temp); break; case REG_C_1: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_1, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_1, data); break; case REG_C_2: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_2, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_2, data); break; case REG_C_3: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_3, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_3, data); break; case REG_C_4: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_4, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_4, data); break; case REG_C_5: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_5, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_5, data); break; case REG_C_6: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_6, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_6, data); break; case REG_C_7: - MLX5_SET(fte_match_set_misc2, misc2_m, metadata_reg_c_7, mask); MLX5_SET(fte_match_set_misc2, misc2_v, metadata_reg_c_7, data); break; default: @@ -9806,34 +9671,71 @@ flow_dv_match_meta_reg(void *matcher, void *key, } } +/** + * Add metadata register item to matcher + * + * @param[in, out] matcher + * Flow matcher. + * @param[in, out] key + * Flow matcher value. + * @param[in] reg_type + * Type of device metadata register + * @param[in] value + * Register value + * @param[in] mask + * Register mask + */ +static void +flow_dv_match_meta_reg_all(void *matcher, void *key, enum modify_reg reg_type, + uint32_t data, uint32_t mask) +{ + flow_dv_match_meta_reg(key, reg_type, data, mask); + flow_dv_match_meta_reg(matcher, reg_type, mask, mask); +} + /** * Add MARK item to matcher * * @param[in] dev * The device to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. 
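 *
 * Note: MARK here (and META below) end up in flow_dv_match_meta_reg(),
 * which now fills a single fte_match_param buffer; the mask argument only
 * pre-masks the data and, for REG_C_0, clears just the owned bits before
 * OR-ing the value in, because C0 may be shared between the vport metadata
 * and the MARK/META value. Hence the extra rte_bsf32(dv_regc0_mask) shift
 * of both value and mask when REG_C_0 is selected. Legacy SW-steering
 * callers keep the two-buffer behaviour through the _all() wrapper added
 * above, which simply runs the same routine twice, e.g.:
 *
 *   flow_dv_match_meta_reg_all(matcher, key, REG_C_1, data, mask);
 *       calls  flow_dv_match_meta_reg(key, REG_C_1, data, mask);
 *       and    flow_dv_match_meta_reg(matcher, REG_C_1, mask, mask);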
*/ static void -flow_dv_translate_item_mark(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_mark(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; const struct rte_flow_item_mark *mark; uint32_t value; - uint32_t mask; - - mark = item->mask ? (const void *)item->mask : - &rte_flow_item_mark_mask; - mask = mark->id & priv->sh->dv_mark_mask; - mark = (const void *)item->spec; - MLX5_ASSERT(mark); - value = mark->id & priv->sh->dv_mark_mask & mask; + uint32_t mask = 0; + + if (key_type & MLX5_SET_MATCHER_SW) { + mark = item->mask ? (const void *)item->mask : + &rte_flow_item_mark_mask; + mask = mark->id; + if (key_type == MLX5_SET_MATCHER_SW_M) { + value = mask; + } else { + mark = (const void *)item->spec; + MLX5_ASSERT(mark); + value = mark->id; + } + } else { + mark = (key_type == MLX5_SET_MATCHER_HS_V) ? + (const void *)item->spec : (const void *)item->mask; + MLX5_ASSERT(mark); + value = mark->id; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + } + mask &= priv->sh->dv_mark_mask; + value &= mask; if (mask) { enum modify_reg reg; @@ -9849,7 +9751,7 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + flow_dv_match_meta_reg(key, reg, value, mask); } } @@ -9858,65 +9760,66 @@ flow_dv_translate_item_mark(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] attr * Attributes of flow that includes this item. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_meta(struct rte_eth_dev *dev, - void *matcher, void *key, + void *key, const struct rte_flow_attr *attr, - const struct rte_flow_item *item) + const struct rte_flow_item *item, + uint32_t key_type) { const struct rte_flow_item_meta *meta_m; const struct rte_flow_item_meta *meta_v; + uint32_t value; + uint32_t mask = 0; + int reg; - meta_m = (const void *)item->mask; - if (!meta_m) - meta_m = &rte_flow_item_meta_mask; - meta_v = (const void *)item->spec; - if (meta_v) { - int reg; - uint32_t value = meta_v->data; - uint32_t mask = meta_m->data; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, meta_v, meta_m, + &rte_flow_item_meta_mask); + value = meta_v->data; + mask = meta_m->data; + if (key_type == MLX5_SET_MATCHER_HS_M) + mask = value; + reg = flow_dv_get_metadata_reg(dev, attr, NULL); + if (reg < 0) + return; + MLX5_ASSERT(reg != REG_NON); + if (reg == REG_C_0) { + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t msk_c0 = priv->sh->dv_regc0_mask; + uint32_t shl_c0 = rte_bsf32(msk_c0); - reg = flow_dv_get_metadata_reg(dev, attr, NULL); - if (reg < 0) - return; - MLX5_ASSERT(reg != REG_NON); - if (reg == REG_C_0) { - struct mlx5_priv *priv = dev->data->dev_private; - uint32_t msk_c0 = priv->sh->dv_regc0_mask; - uint32_t shl_c0 = rte_bsf32(msk_c0); - - mask &= msk_c0; - mask <<= shl_c0; - value <<= shl_c0; - } - flow_dv_match_meta_reg(matcher, key, reg, value, mask); + mask &= msk_c0; + mask <<= shl_c0; + value <<= shl_c0; } + flow_dv_match_meta_reg(key, reg, value, mask); } /** * Add vport metadata Reg C0 item to matcher * - * @param[in, out] matcher - * Flow matcher. 
* @param[in, out] key * Flow matcher value. - * @param[in] reg - * Flow pattern to translate. + * @param[in] value + * Register value + * @param[in] mask + * Register mask */ static void -flow_dv_translate_item_meta_vport(void *matcher, void *key, - uint32_t value, uint32_t mask) +flow_dv_translate_item_meta_vport(void *key, uint32_t value, uint32_t mask) { - flow_dv_match_meta_reg(matcher, key, REG_C_0, value, mask); + flow_dv_match_meta_reg(key, REG_C_0, value, mask); } /** @@ -9924,17 +9827,17 @@ flow_dv_translate_item_meta_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tag *tag_v = item->spec; const struct mlx5_rte_flow_item_tag *tag_m = item->mask; @@ -9943,6 +9846,8 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, MLX5_ASSERT(tag_v); value = tag_v->data; mask = tag_m ? tag_m->data : UINT32_MAX; + if (key_type & MLX5_SET_MATCHER_M) + value = mask; if (tag_v->id == REG_C_0) { struct mlx5_priv *priv = dev->data->dev_private; uint32_t msk_c0 = priv->sh->dv_regc0_mask; @@ -9952,7 +9857,7 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, mask <<= shl_c0; value <<= shl_c0; } - flow_dv_match_meta_reg(matcher, key, tag_v->id, value, mask); + flow_dv_match_meta_reg(key, tag_v->id, value, mask); } /** @@ -9960,50 +9865,50 @@ flow_dv_translate_mlx5_item_tag(struct rte_eth_dev *dev, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_tag(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_tag *tag_v = item->spec; - const struct rte_flow_item_tag *tag_m = item->mask; + const struct rte_flow_item_tag *tag_vv = item->spec; + const struct rte_flow_item_tag *tag_v; + const struct rte_flow_item_tag *tag_m; enum modify_reg reg; + uint32_t index; - MLX5_ASSERT(tag_v); - tag_m = tag_m ? tag_m : &rte_flow_item_tag_mask; + if (MLX5_ITEM_VALID(item, key_type)) + return; + MLX5_ITEM_UPDATE(item, key_type, tag_v, tag_m, + &rte_flow_item_tag_mask); + /* When set mask, the index should be from spec. */ + index = tag_vv ? tag_vv->index : tag_v->index; /* Get the metadata register index for the tag. */ - reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, tag_v->index, NULL); + reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL); MLX5_ASSERT(reg > 0); - flow_dv_match_meta_reg(matcher, key, reg, tag_v->data, tag_m->data); + flow_dv_match_meta_reg(key, reg, tag_v->data, tag_m->data); } /** * Add source vport match to the specified matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. 
* @param[in] port * Source vport value to match - * @param[in] mask - * Mask */ static void -flow_dv_translate_item_source_vport(void *matcher, void *key, - int16_t port, uint16_t mask) +flow_dv_translate_item_source_vport(void *key, + int16_t port) { - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - MLX5_SET(fte_match_set_misc, misc_m, source_port, mask); MLX5_SET(fte_match_set_misc, misc_v, source_port, port); } @@ -10012,31 +9917,34 @@ flow_dv_translate_item_source_vport(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] + * @param[in] attr * Flow attributes. + * @param[in] key_type + * Set flow matcher mask or value. * * @return * 0 on success, a negative errno value otherwise. */ static int -flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) +flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_port_id *pid_m = item ? item->mask : NULL; const struct rte_flow_item_port_id *pid_v = item ? item->spec : NULL; struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; if (pid_v && pid_v->id == MLX5_PORT_ESW_MGR) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), 0xffff); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->id : 0xffff; @@ -10044,6 +9952,13 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10057,20 +9972,17 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, */ if (mask == 0xffff && priv->vport_id == 0xffff && priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, - priv->vport_meta_mask); + flow_dv_translate_item_meta_vport + (key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } @@ -10080,8 +9992,6 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item @@ -10093,21 +10003,25 @@ flow_dv_translate_item_port_id(struct rte_eth_dev *dev, void *matcher, * 0 on success, a negative errno value otherwise. 
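 *
 * Note: like PORT_ID above, a port_id of UINT16_MAX selects the E-Switch
 * manager: the value pass matches mlx5_flow_get_esw_manager_vport_id(dev)
 * in misc.source_port and the mask pass matches 0xffff. Otherwise the port
 * is resolved through mlx5_port_to_eswitch_info() and, when the PMD uses
 * extended flow metadata, matched through the REG_C_0 vport metadata rather
 * than misc.source_port; the extra source_port match is now skipped when
 * dv_flow_en == 2 (HW steering). The only key_type-specific part is which
 * value/mask pair gets written:
 *
 *   if (key_type & MLX5_SET_MATCHER_M) {
 *           id = mask;
 *           vport_meta = priv->vport_meta_mask;
 *   } else {
 *           id = priv->vport_id;
 *           vport_meta = priv->vport_meta_tag;
 *           wks->vport_meta_tag = vport_meta;
 *   }
 *
 * A caller-side item for the E-Switch manager port would look roughly like
 * (illustrative only):
 *
 *   struct rte_flow_item_ethdev port_spec = { .port_id = UINT16_MAX };
 *   struct rte_flow_item item = {
 *           .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
 *           .spec = &port_spec,
 *           .mask = &rte_flow_item_ethdev_mask,
 *   };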
*/ static int -flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, - void *key, +flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *key, const struct rte_flow_item *item, - const struct rte_flow_attr *attr) + const struct rte_flow_attr *attr, + uint32_t key_type) { const struct rte_flow_item_ethdev *pid_m = item ? item->mask : NULL; const struct rte_flow_item_ethdev *pid_v = item ? item->spec : NULL; + struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); struct mlx5_priv *priv; uint16_t mask, id; + uint32_t vport_meta; + MLX5_ASSERT(wks); if (!pid_m && !pid_v) return 0; if (pid_v && pid_v->port_id == UINT16_MAX) { - flow_dv_translate_item_source_vport(matcher, key, - mlx5_flow_get_esw_manager_vport_id(dev), UINT16_MAX); + flow_dv_translate_item_source_vport(key, + key_type & MLX5_SET_MATCHER_V ? + mlx5_flow_get_esw_manager_vport_id(dev) : 0xffff); return 0; } mask = pid_m ? pid_m->port_id : UINT16_MAX; @@ -10115,6 +10029,14 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, priv = mlx5_port_to_eswitch_info(id, item == NULL); if (!priv) return -rte_errno; + if (key_type & MLX5_SET_MATCHER_M) { + id = mask; + vport_meta = priv->vport_meta_mask; + } else { + id = priv->vport_id; + vport_meta = priv->vport_meta_tag; + wks->vport_meta_tag = vport_meta; + } /* * Translate to vport field or to metadata, depending on mode. * Kernel can use either misc.source_port or half of C0 metadata @@ -10127,119 +10049,133 @@ flow_dv_translate_item_represented_port(struct rte_eth_dev *dev, void *matcher, * save the extra vport match. */ if (mask == UINT16_MAX && priv->vport_id == UINT16_MAX && - priv->pf_bond < 0 && attr->transfer) - flow_dv_translate_item_source_vport - (matcher, key, priv->vport_id, mask); + priv->pf_bond < 0 && attr->transfer && + priv->sh->config.dv_flow_en != 2) + flow_dv_translate_item_source_vport(key, id); /* * We should always set the vport metadata register, * otherwise the SW steering library can drop * the rule if wire vport metadata value is not zero, * it depends on kernel configuration. */ - flow_dv_translate_item_meta_vport(matcher, key, - priv->vport_meta_tag, + flow_dv_translate_item_meta_vport(key, vport_meta, priv->vport_meta_mask); } else { - flow_dv_translate_item_source_vport(matcher, key, - priv->vport_id, mask); + flow_dv_translate_item_source_vport(key, id); } return 0; } /** - * Add ICMP6 item to matcher and to the value. + * Translate port-id item to eswitch match on port-id. * + * @param[in] dev + * The devich to configure through. * @param[in, out] matcher * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] attr + * Flow attributes. + * + * @return + * 0 on success, a negative errno value otherwise. + */ +static int +flow_dv_translate_item_port_id_all(struct rte_eth_dev *dev, + void *matcher, void *key, + const struct rte_flow_item *item, + const struct rte_flow_attr *attr) +{ + int ret; + + ret = flow_dv_translate_item_port_id + (dev, matcher, item, attr, MLX5_SET_MATCHER_SW_M); + if (ret) + return ret; + ret = flow_dv_translate_item_port_id + (dev, key, item, attr, MLX5_SET_MATCHER_SW_V); + return ret; +} + + +/** + * Add ICMP6 item to the value. + * + * @param[in, out] key + * Flow matcher value. + * @param[in] item + * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. 
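 *
 * Note: ICMP6 here and ICMP below follow the same split as the other L4
 * translators: ip_protocol is preset to 0xFF on the mask pass and to
 * IPPROTO_ICMPV6 / IPPROTO_ICMP on the value pass, and type/code are
 * matched through misc_parameters_3. For ICMP the 16-bit identifier and
 * sequence number are additionally packed into the single icmp_header_data
 * dword before being written, roughly:
 *
 *   icmp_header_data = rte_be_to_cpu_16(hdr.icmp_seq_nb) |
 *                      rte_be_to_cpu_16(hdr.icmp_ident) << 16;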
*/ static void -flow_dv_translate_item_icmp6(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp6(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp6 *icmp6_m = item->mask; - const struct rte_flow_item_icmp6 *icmp6_v = item->spec; - void *headers_m; + const struct rte_flow_item_icmp6 *icmp6_m; + const struct rte_flow_item_icmp6 *icmp6_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMPV6); - if (!icmp6_v) + + headers_v = inner ? MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, + IPPROTO_ICMPV6); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp6_m) - icmp6_m = &rte_flow_item_icmp6_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_type, icmp6_m->type); + MLX5_ITEM_UPDATE(item, key_type, icmp6_v, icmp6_m, + &rte_flow_item_icmp6_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_type, icmp6_v->type & icmp6_m->type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmpv6_code, icmp6_m->code); MLX5_SET(fte_match_set_misc3, misc3_v, icmpv6_code, icmp6_v->code & icmp6_m->code); } /** - * Add ICMP item to matcher and to the value. + * Add ICMP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_icmp(void *matcher, void *key, - const struct rte_flow_item *item, - int inner) +flow_dv_translate_item_icmp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_icmp *icmp_m = item->mask; - const struct rte_flow_item_icmp *icmp_v = item->spec; + const struct rte_flow_item_icmp *icmp_m; + const struct rte_flow_item_icmp *icmp_v; uint32_t icmp_header_data_m = 0; uint32_t icmp_header_data_v = 0; - void *headers_m; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } - MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_protocol, 0xFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, IPPROTO_ICMP); - if (!icmp_v) + + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, 0xFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + ip_protocol, IPPROTO_ICMP); + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!icmp_m) - icmp_m = &rte_flow_item_icmp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_type, - icmp_m->hdr.icmp_type); + MLX5_ITEM_UPDATE(item, key_type, icmp_v, icmp_m, + &rte_flow_item_icmp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_type, icmp_v->hdr.icmp_type & icmp_m->hdr.icmp_type); - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_code, - icmp_m->hdr.icmp_code); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_code, icmp_v->hdr.icmp_code & icmp_m->hdr.icmp_code); icmp_header_data_m = rte_be_to_cpu_16(icmp_m->hdr.icmp_seq_nb); @@ -10248,64 +10184,51 @@ flow_dv_translate_item_icmp(void *matcher, void *key, icmp_header_data_v = rte_be_to_cpu_16(icmp_v->hdr.icmp_seq_nb); icmp_header_data_v |= rte_be_to_cpu_16(icmp_v->hdr.icmp_ident) << 16; - MLX5_SET(fte_match_set_misc3, misc3_m, icmp_header_data, - icmp_header_data_m); MLX5_SET(fte_match_set_misc3, misc3_v, icmp_header_data, icmp_header_data_v & icmp_header_data_m); } } /** - * Add GTP item to matcher and to the value. + * Add GTP item to the value. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] inner * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_gtp(void *matcher, void *key, - const struct rte_flow_item *item, int inner) +flow_dv_translate_item_gtp(void *key, const struct rte_flow_item *item, + int inner, uint32_t key_type) { - const struct rte_flow_item_gtp *gtp_m = item->mask; - const struct rte_flow_item_gtp *gtp_v = item->spec; - void *headers_m; + const struct rte_flow_item_gtp *gtp_m; + const struct rte_flow_item_gtp *gtp_v; void *headers_v; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); uint16_t dport = RTE_GTPU_UDP_PORT; - if (inner) { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - inner_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, inner_headers); - } else { - headers_m = MLX5_ADDR_OF(fte_match_param, matcher, - outer_headers); - headers_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - } + headers_v = inner ? 
MLX5_ADDR_OF(fte_match_param, key, inner_headers) : + MLX5_ADDR_OF(fte_match_param, key, outer_headers); if (!MLX5_GET16(fte_match_set_lyr_2_4, headers_v, udp_dport)) { - MLX5_SET(fte_match_set_lyr_2_4, headers_m, udp_dport, 0xFFFF); - MLX5_SET(fte_match_set_lyr_2_4, headers_v, udp_dport, dport); + if (key_type & MLX5_SET_MATCHER_M) + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, 0xFFFF); + else + MLX5_SET(fte_match_set_lyr_2_4, headers_v, + udp_dport, dport); } - if (!gtp_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!gtp_m) - gtp_m = &rte_flow_item_gtp_mask; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, - gtp_m->v_pt_rsv_flags); + MLX5_ITEM_UPDATE(item, key_type, gtp_v, gtp_m, + &rte_flow_item_gtp_mask); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_v->v_pt_rsv_flags & gtp_m->v_pt_rsv_flags); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_type, gtp_m->msg_type); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_type, gtp_v->msg_type & gtp_m->msg_type); - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_teid, - rte_be_to_cpu_32(gtp_m->teid)); MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_teid, rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid)); } @@ -10313,21 +10236,19 @@ flow_dv_translate_item_gtp(void *matcher, void *key, /** * Add GTP PSC item to matcher. * - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. + * @param[in] key_type + * Set flow matcher mask or value. */ static int -flow_dv_translate_item_gtp_psc(void *matcher, void *key, - const struct rte_flow_item *item) +flow_dv_translate_item_gtp_psc(void *key, const struct rte_flow_item *item, + uint32_t key_type) { - const struct rte_flow_item_gtp_psc *gtp_psc_m = item->mask; - const struct rte_flow_item_gtp_psc *gtp_psc_v = item->spec; - void *misc3_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_3); + const struct rte_flow_item_gtp_psc *gtp_psc_m; + const struct rte_flow_item_gtp_psc *gtp_psc_v; void *misc3_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_3); union { uint32_t w32; @@ -10337,52 +10258,40 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, uint8_t next_ext_header_type; }; } dw_2; + union { + uint32_t w32; + struct { + uint8_t len; + uint8_t type_flags; + uint8_t qfi; + uint8_t reserved; + }; + } dw_0; uint8_t gtp_flags; /* Always set E-flag match on one, regardless of GTP item settings. */ - gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_m, gtpu_msg_flags); - gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_msg_flags, gtp_flags); gtp_flags = MLX5_GET(fte_match_set_misc3, misc3_v, gtpu_msg_flags); gtp_flags |= MLX5_GTP_EXT_HEADER_FLAG; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_msg_flags, gtp_flags); /*Set next extension header type. */ dw_2.seq_num = 0; dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0xff; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_dw_2, - rte_cpu_to_be_32(dw_2.w32)); - dw_2.seq_num = 0; - dw_2.npdu_num = 0; - dw_2.next_ext_header_type = 0x85; + if (key_type & MLX5_SET_MATCHER_M) + dw_2.next_ext_header_type = 0xff; + else + dw_2.next_ext_header_type = 0x85; MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_dw_2, rte_cpu_to_be_32(dw_2.w32)); - if (gtp_psc_v) { - union { - uint32_t w32; - struct { - uint8_t len; - uint8_t type_flags; - uint8_t qfi; - uint8_t reserved; - }; - } dw_0; - - /*Set extension header PDU type and Qos. 
*/ - if (!gtp_psc_m) - gtp_psc_m = &rte_flow_item_gtp_psc_mask; - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_m, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - dw_0.w32 = 0; - dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & - gtp_psc_m->hdr.type); - dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; - MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, - rte_cpu_to_be_32(dw_0.w32)); - } + if (MLX5_ITEM_VALID(item, key_type)) + return 0; + MLX5_ITEM_UPDATE(item, key_type, gtp_psc_v, + gtp_psc_m, &rte_flow_item_gtp_psc_mask); + dw_0.w32 = 0; + dw_0.type_flags = MLX5_GTP_PDU_TYPE_SHIFT(gtp_psc_v->hdr.type & + gtp_psc_m->hdr.type); + dw_0.qfi = gtp_psc_v->hdr.qfi & gtp_psc_m->hdr.qfi; + MLX5_SET(fte_match_set_misc3, misc3_v, gtpu_first_ext_dw_0, + rte_cpu_to_be_32(dw_0.w32)); return 0; } @@ -10391,29 +10300,27 @@ flow_dv_translate_item_gtp_psc(void *matcher, void *key, * * @param[in] dev * The devich to configure through. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. * @param[in] last_item * Last item flags. + * @param[in] key_type + * Set flow matcher mask or value. */ static void -flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, - void *key, const struct rte_flow_item *item, - uint64_t last_item) +flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *key, + const struct rte_flow_item *item, + uint64_t last_item, uint32_t key_type) { struct mlx5_priv *priv = dev->data->dev_private; - const struct rte_flow_item_ecpri *ecpri_m = item->mask; - const struct rte_flow_item_ecpri *ecpri_v = item->spec; + const struct rte_flow_item_ecpri *ecpri_m; + const struct rte_flow_item_ecpri *ecpri_v; + const struct rte_flow_item_ecpri *ecpri_vv = item->spec; struct rte_ecpri_common_hdr common; - void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher, - misc_parameters_4); void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4); uint32_t *samples; - void *dw_m; void *dw_v; /* @@ -10421,21 +10328,22 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * match on eCPRI EtherType implicitly. */ if (last_item & MLX5_FLOW_LAYER_OUTER_L2) { - void *hdrs_m, *hdrs_v, *l2m, *l2v; + void *hdrs_v, *l2v; - hdrs_m = MLX5_ADDR_OF(fte_match_param, matcher, outer_headers); hdrs_v = MLX5_ADDR_OF(fte_match_param, key, outer_headers); - l2m = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_m, ethertype); l2v = MLX5_ADDR_OF(fte_match_set_lyr_2_4, hdrs_v, ethertype); - if (*(uint16_t *)l2m == 0 && *(uint16_t *)l2v == 0) { - *(uint16_t *)l2m = UINT16_MAX; - *(uint16_t *)l2v = RTE_BE16(RTE_ETHER_TYPE_ECPRI); + if (*(uint16_t *)l2v == 0) { + if (key_type & MLX5_SET_MATCHER_M) + *(uint16_t *)l2v = UINT16_MAX; + else + *(uint16_t *)l2v = + RTE_BE16(RTE_ETHER_TYPE_ECPRI); } } - if (!ecpri_v) + if (MLX5_ITEM_VALID(item, key_type)) return; - if (!ecpri_m) - ecpri_m = &rte_flow_item_ecpri_mask; + MLX5_ITEM_UPDATE(item, key_type, ecpri_v, ecpri_m, + &rte_flow_item_ecpri_mask); /* * Maximal four DW samples are supported in a single matching now. * Two are used now for a eCPRI matching: @@ -10447,16 +10355,11 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, return; samples = priv->sh->ecpri_parser.ids; /* Need to take the whole DW as the mask to fill the entry. 
*/ - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_0); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_0); /* Already big endian (network order) in the header. */ - *(uint32_t *)dw_m = ecpri_m->hdr.common.u32; *(uint32_t *)dw_v = ecpri_v->hdr.common.u32 & ecpri_m->hdr.common.u32; /* Sample#0, used for matching type, offset 0. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_0, samples[0]); /* It makes no sense to set the sample ID in the mask field. */ MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_0, samples[0]); @@ -10465,21 +10368,19 @@ flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher, * Some wildcard rules only matching type field should be supported. */ if (ecpri_m->hdr.dummy[0]) { - common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); + if (key_type == MLX5_SET_MATCHER_SW_M) + common.u32 = rte_be_to_cpu_32(ecpri_vv->hdr.common.u32); + else + common.u32 = rte_be_to_cpu_32(ecpri_v->hdr.common.u32); switch (common.type) { case RTE_ECPRI_MSG_TYPE_IQ_DATA: case RTE_ECPRI_MSG_TYPE_RTC_CTRL: case RTE_ECPRI_MSG_TYPE_DLY_MSR: - dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m, - prog_sample_field_value_1); dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v, prog_sample_field_value_1); - *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0]; *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0] & ecpri_m->hdr.dummy[0]; /* Sample#1, to match message body, offset 4. */ - MLX5_SET(fte_match_set_misc4, misc4_m, - prog_sample_field_id_1, samples[1]); MLX5_SET(fte_match_set_misc4, misc4_v, prog_sample_field_id_1, samples[1]); break; @@ -10544,7 +10445,7 @@ flow_dv_translate_item_aso_ct(struct rte_eth_dev *dev, reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, &error); if (reg_id == REG_NON) return; - flow_dv_match_meta_reg(matcher, key, (enum modify_reg)reg_id, + flow_dv_match_meta_reg_all(matcher, key, (enum modify_reg)reg_id, reg_value, reg_mask); } @@ -11330,42 +11231,48 @@ flow_dv_translate_create_counter(struct rte_eth_dev *dev, * * @param[in] dev * Pointer to the dev struct. - * @param[in, out] matcher - * Flow matcher. * @param[in, out] key * Flow matcher value. * @param[in] item * Flow pattern to translate. - * @param[in] inner - * Item is inner pattern. + * @param[in] key_type + * Set flow matcher mask or value. */ static void flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, - void *matcher, void *key, - const struct rte_flow_item *item) + void *key, + const struct rte_flow_item *item, + uint32_t key_type) { const struct mlx5_rte_flow_item_tx_queue *queue_m; const struct mlx5_rte_flow_item_tx_queue *queue_v; - void *misc_m = MLX5_ADDR_OF(fte_match_param, matcher, misc_parameters); - void *misc_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters); - struct mlx5_txq_ctrl *txq; - uint32_t queue, mask; + const struct mlx5_rte_flow_item_tx_queue queue_mask = { + .queue = UINT32_MAX, + }; + void *misc_v = + MLX5_ADDR_OF(fte_match_param, key, misc_parameters); + struct mlx5_txq_ctrl *txq = NULL; + uint32_t queue; - queue_m = (const void *)item->mask; - queue_v = (const void *)item->spec; - if (!queue_v) + MLX5_ITEM_UPDATE(item, key_type, queue_v, queue_m, &queue_mask); + if (!queue_m || !queue_v) return; - txq = mlx5_txq_get(dev, queue_v->queue); - if (!txq) - return; - if (txq->is_hairpin) - queue = txq->obj->sq->id; - else - queue = txq->obj->sq_obj.sq->id; - mask = queue_m == NULL ? 
UINT32_MAX : queue_m->queue; - MLX5_SET(fte_match_set_misc, misc_m, source_sqn, mask); - MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue & mask); - mlx5_txq_release(dev, queue_v->queue); + if (key_type & MLX5_SET_MATCHER_V) { + txq = mlx5_txq_get(dev, queue_v->queue); + if (!txq) + return; + if (txq->is_hairpin) + queue = txq->obj->sq->id; + else + queue = txq->obj->sq_obj.sq->id; + if (key_type == MLX5_SET_MATCHER_SW_V) + queue &= queue_m->queue; + } else { + queue = queue_m->queue; + } + MLX5_SET(fte_match_set_misc, misc_v, source_sqn, queue); + if (txq) + mlx5_txq_release(dev, queue_v->queue); } /** @@ -13076,7 +12983,298 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, } /** - * Translate the flow item to matcher. + * Fill the flow matcher with DV spec. + * + * @param[in] dev + * Pointer to rte_eth_dev structure. + * @param[in] items + * Pointer to the list of items. + * @param[in] wks + * Pointer to the matcher workspace. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +static int +flow_dv_translate_items(struct rte_eth_dev *dev, + const struct rte_flow_item *items, + struct mlx5_dv_matcher_workspace *wks, + void *key, uint32_t key_type, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc *rss_desc = wks->rss_desc; + uint8_t next_protocol = wks->next_protocol; + int tunnel = !!(wks->item_flags & MLX5_FLOW_LAYER_TUNNEL); + int item_type = items->type; + uint64_t last_item = wks->last_item; + int ret; + + switch (item_type) { + case RTE_FLOW_ITEM_TYPE_ESP: + flow_dv_translate_item_esp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_ITEM_ESP; + break; + case RTE_FLOW_ITEM_TYPE_PORT_ID: + flow_dv_translate_item_port_id + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_PORT_ID; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + flow_dv_translate_item_represented_port + (dev, key, items, wks->attr, key_type); + last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; + break; + case RTE_FLOW_ITEM_TYPE_ETH: + flow_dv_translate_item_eth(key, items, tunnel, + wks->group, key_type); + wks->priority = wks->action_flags & + MLX5_FLOW_ACTION_DEFAULT_MISS && + !wks->external ? + MLX5_PRIORITY_MAP_L3 : + MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + flow_dv_translate_item_vlan(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L2; + last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | + MLX5_FLOW_LAYER_INNER_VLAN) : + (MLX5_FLOW_LAYER_OUTER_L2 | + MLX5_FLOW_LAYER_OUTER_VLAN); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv4(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv4 *) + items->mask)->hdr.next_proto_id) { + next_protocol = + ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + next_protocol &= + ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->mask))->hdr.next_proto_id; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv4 *) + (items->spec))->hdr.next_proto_id; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + mlx5_flow_tunnel_ip_check(items, next_protocol, + &wks->item_flags, &tunnel); + flow_dv_translate_item_ipv6(key, items, tunnel, + wks->group, key_type); + wks->priority = MLX5_PRIORITY_MAP_L3; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto) { + next_protocol = + ((const struct rte_flow_item_ipv6 *) + items->spec)->hdr.proto; + next_protocol &= + ((const struct rte_flow_item_ipv6 *) + items->mask)->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->mask))->hdr.proto; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6 *) + (items->spec))->hdr.proto; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: + flow_dv_translate_item_ipv6_frag_ext + (key, items, tunnel, key_type); + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : + MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; + if (items->mask != NULL && + items->spec != NULL && + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header) { + next_protocol = + ((const struct rte_flow_item_ipv6_frag_ext *) + items->spec)->hdr.next_header; + next_protocol &= + ((const struct rte_flow_item_ipv6_frag_ext *) + items->mask)->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_M && + items->mask != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->mask))->hdr.next_header; + } else if (key_type == MLX5_SET_MATCHER_HS_V && + items->spec != NULL) { + next_protocol = ((const struct rte_flow_item_ipv6_frag_ext *) + (items->spec))->hdr.next_header; + } else { + /* Reset for inner layer. */ + next_protocol = 0xff; + } + break; + case RTE_FLOW_ITEM_TYPE_TCP: + flow_dv_translate_item_tcp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + flow_dv_translate_item_udp(key, items, tunnel, wks, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + wks->gre_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + flow_dv_translate_item_gre_key(key, items, key_type); + last_item = MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + flow_dv_translate_item_vxlan(dev, wks->attr, key, + items, tunnel, wks, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_VXLAN_GPE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE: + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + wks->tunnel_item = items; + last_item = MLX5_FLOW_LAYER_GENEVE; + break; + case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: + ret = flow_dv_translate_item_geneve_opt + (dev, key, items, key_type, error); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GENEVE TLV option"); + wks->geneve_tlv_option = 1; + last_item = MLX5_FLOW_LAYER_GENEVE_OPT; + break; + case RTE_FLOW_ITEM_TYPE_MPLS: + flow_dv_translate_item_mpls(key, items, last_item, + tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_MPLS; + break; + case RTE_FLOW_ITEM_TYPE_MARK: + flow_dv_translate_item_mark(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_MARK; + break; + case RTE_FLOW_ITEM_TYPE_META: + flow_dv_translate_item_meta + (dev, key, wks->attr, items, key_type); + last_item = MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + flow_dv_translate_item_icmp(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + flow_dv_translate_item_icmp6(key, items, tunnel, key_type); + wks->priority = MLX5_PRIORITY_MAP_L4; + last_item = MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TAG; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + flow_dv_translate_item_tx_queue(dev, key, items, key_type); + last_item = MLX5_FLOW_ITEM_TX_QUEUE; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + flow_dv_translate_item_gtp(key, items, tunnel, key_type); + wks->priority = MLX5_TUNNEL_PRIO_GET(rss_desc); + last_item = MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = flow_dv_translate_item_gtp_psc(key, items, key_type); + if (ret) + return rte_flow_error_set(error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "cannot create GTP PSC item"); + last_item = MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_ECPRI: + if (!mlx5_flex_parser_ecpri_exist(dev)) { + /* Create it only the first time to be used. 
*/ + ret = mlx5_flex_parser_ecpri_alloc(dev); + if (ret) + return rte_flow_error_set + (error, -ret, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, + "cannot create eCPRI parser"); + } + flow_dv_translate_item_ecpri + (dev, key, items, last_item, key_type); + /* No other protocol should follow eCPRI layer. */ + last_item = MLX5_FLOW_LAYER_ECPRI; + break; + default: + break; + } + wks->item_flags |= last_item; + wks->last_item = last_item; + wks->next_protocol = next_protocol; + return 0; +} + +/** + * Fill the SW steering flow with DV spec. * * @param[in] dev * Pointer to rte_eth_dev structure. @@ -13086,7 +13284,7 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * Pointer to the flow attributes. * @param[in] items * Pointer to the list of items. - * @param[in] matcher + * @param[in, out] matcher * Pointer to the flow matcher. * @param[out] error * Pointer to the error structure. @@ -13095,287 +13293,41 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev, * 0 on success, a negative errno value otherwise and rte_errno is set. */ static int -flow_dv_translate_items(struct rte_eth_dev *dev, - struct mlx5_flow *dev_flow, - const struct rte_flow_attr *attr, - const struct rte_flow_item items[], - struct mlx5_flow_dv_matcher *matcher, - struct rte_flow_error *error) +flow_dv_translate_items_sws(struct rte_eth_dev *dev, + struct mlx5_flow *dev_flow, + const struct rte_flow_attr *attr, + const struct rte_flow_item *items, + struct mlx5_flow_dv_matcher *matcher, + struct rte_flow_error *error) { struct mlx5_priv *priv = dev->data->dev_private; - struct rte_flow *flow = dev_flow->flow; - struct mlx5_flow_handle *handle = dev_flow->handle; - struct mlx5_flow_workspace *wks = mlx5_flow_get_thread_workspace(); - struct mlx5_flow_rss_desc *rss_desc = &wks->rss_desc; - uint64_t item_flags = 0; - uint64_t last_item = 0; void *match_mask = matcher->mask.buf; void *match_value = dev_flow->dv.value.buf; - uint8_t next_protocol = 0xff; - uint16_t priority = 0; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = dev_flow->act_flags, + .item_flags = 0, + .external = dev_flow->external, + .next_protocol = 0xff, + .group = dev_flow->dv.group, + .attr = attr, + .rss_desc = &((struct mlx5_flow_workspace *) + mlx5_flow_get_thread_workspace())->rss_desc, + }; + struct mlx5_dv_matcher_workspace wks_m = wks; const struct rte_flow_item *integrity_items[2] = {NULL, NULL}; - const struct rte_flow_item *tunnel_item = NULL; - const struct rte_flow_item *gre_item = NULL; int ret = 0; + int tunnel; for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { - int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); - int item_type = items->type; - - if (!mlx5_flow_os_item_supported(item_type)) + if (!mlx5_flow_os_item_supported(items->type)) return rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL, "item not supported"); - switch (item_type) { - case RTE_FLOW_ITEM_TYPE_ESP: - flow_dv_translate_item_esp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_ITEM_ESP; - break; - case RTE_FLOW_ITEM_TYPE_PORT_ID: - flow_dv_translate_item_port_id - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_PORT_ID; - break; - case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: - flow_dv_translate_item_represented_port - (dev, match_mask, match_value, items, attr); - last_item = MLX5_FLOW_ITEM_REPRESENTED_PORT; - break; - case RTE_FLOW_ITEM_TYPE_ETH: - flow_dv_translate_item_eth(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - 
priority = dev_flow->act_flags & - MLX5_FLOW_ACTION_DEFAULT_MISS && - !dev_flow->external ? - MLX5_PRIORITY_MAP_L3 : - MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L2 : - MLX5_FLOW_LAYER_OUTER_L2; - break; - case RTE_FLOW_ITEM_TYPE_VLAN: - flow_dv_translate_item_vlan(dev_flow, - match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L2; - last_item = tunnel ? (MLX5_FLOW_LAYER_INNER_L2 | - MLX5_FLOW_LAYER_INNER_VLAN) : - (MLX5_FLOW_LAYER_OUTER_L2 | - MLX5_FLOW_LAYER_OUTER_VLAN); - break; - case RTE_FLOW_ITEM_TYPE_IPV4: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv4(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : - MLX5_FLOW_LAYER_OUTER_L3_IPV4; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv4 *) - items->mask)->hdr.next_proto_id) { - next_protocol = - ((const struct rte_flow_item_ipv4 *) - (items->spec))->hdr.next_proto_id; - next_protocol &= - ((const struct rte_flow_item_ipv4 *) - (items->mask))->hdr.next_proto_id; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6: - mlx5_flow_tunnel_ip_check(items, next_protocol, - &item_flags, &tunnel); - flow_dv_translate_item_ipv6(match_mask, match_value, - items, tunnel, - dev_flow->dv.group); - priority = MLX5_PRIORITY_MAP_L3; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : - MLX5_FLOW_LAYER_OUTER_L3_IPV6; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto) { - next_protocol = - ((const struct rte_flow_item_ipv6 *) - items->spec)->hdr.proto; - next_protocol &= - ((const struct rte_flow_item_ipv6 *) - items->mask)->hdr.proto; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT: - flow_dv_translate_item_ipv6_frag_ext(match_mask, - match_value, - items, tunnel); - last_item = tunnel ? - MLX5_FLOW_LAYER_INNER_L3_IPV6_FRAG_EXT : - MLX5_FLOW_LAYER_OUTER_L3_IPV6_FRAG_EXT; - if (items->mask != NULL && - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header) { - next_protocol = - ((const struct rte_flow_item_ipv6_frag_ext *) - items->spec)->hdr.next_header; - next_protocol &= - ((const struct rte_flow_item_ipv6_frag_ext *) - items->mask)->hdr.next_header; - } else { - /* Reset for inner layer. */ - next_protocol = 0xff; - } - break; - case RTE_FLOW_ITEM_TYPE_TCP: - flow_dv_translate_item_tcp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP : - MLX5_FLOW_LAYER_OUTER_L4_TCP; - break; - case RTE_FLOW_ITEM_TYPE_UDP: - flow_dv_translate_item_udp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_UDP : - MLX5_FLOW_LAYER_OUTER_L4_UDP; - break; - case RTE_FLOW_ITEM_TYPE_GRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - gre_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GRE_KEY: - flow_dv_translate_item_gre_key(match_mask, - match_value, items); - last_item = MLX5_FLOW_LAYER_GRE_KEY; - break; - case RTE_FLOW_ITEM_TYPE_GRE_OPTION: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_NVGRE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GRE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN: - flow_dv_translate_item_vxlan(dev, attr, - match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN; - break; - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_VXLAN_GPE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE: - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GENEVE; - tunnel_item = items; - break; - case RTE_FLOW_ITEM_TYPE_GENEVE_OPT: - ret = flow_dv_translate_item_geneve_opt(dev, match_mask, - match_value, - items, error); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GENEVE TLV option"); - flow->geneve_tlv_option = 1; - last_item = MLX5_FLOW_LAYER_GENEVE_OPT; - break; - case RTE_FLOW_ITEM_TYPE_MPLS: - flow_dv_translate_item_mpls(match_mask, match_value, - items, last_item, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_MPLS; - break; - case RTE_FLOW_ITEM_TYPE_MARK: - flow_dv_translate_item_mark(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_MARK; - break; - case RTE_FLOW_ITEM_TYPE_META: - flow_dv_translate_item_meta(dev, match_mask, - match_value, attr, items); - last_item = MLX5_FLOW_ITEM_METADATA; - break; - case RTE_FLOW_ITEM_TYPE_ICMP: - flow_dv_translate_item_icmp(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP; - break; - case RTE_FLOW_ITEM_TYPE_ICMP6: - flow_dv_translate_item_icmp6(match_mask, match_value, - items, tunnel); - priority = MLX5_PRIORITY_MAP_L4; - last_item = MLX5_FLOW_LAYER_ICMP6; - break; - case RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TAG: - flow_dv_translate_mlx5_item_tag(dev, match_mask, - match_value, items); - last_item = MLX5_FLOW_ITEM_TAG; - break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - flow_dv_translate_item_tx_queue(dev, match_mask, - match_value, - items); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; - break; - case RTE_FLOW_ITEM_TYPE_GTP: - flow_dv_translate_item_gtp(match_mask, match_value, - items, tunnel); - priority = MLX5_TUNNEL_PRIO_GET(rss_desc); - last_item = MLX5_FLOW_LAYER_GTP; - break; - case RTE_FLOW_ITEM_TYPE_GTP_PSC: - ret = flow_dv_translate_item_gtp_psc(match_mask, - match_value, - items); - if (ret) - return rte_flow_error_set(error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, NULL, - "cannot create GTP PSC item"); - last_item = MLX5_FLOW_LAYER_GTP_PSC; - break; - case RTE_FLOW_ITEM_TYPE_ECPRI: - if (!mlx5_flex_parser_ecpri_exist(dev)) { - /* Create it only the first time to be used. 
*/ - ret = mlx5_flex_parser_ecpri_alloc(dev); - if (ret) - return rte_flow_error_set - (error, -ret, - RTE_FLOW_ERROR_TYPE_ITEM, - NULL, - "cannot create eCPRI parser"); - } - flow_dv_translate_item_ecpri(dev, match_mask, - match_value, items, - last_item); - /* No other protocol should follow eCPRI layer. */ - last_item = MLX5_FLOW_LAYER_ECPRI; - break; + tunnel = !!(wks.item_flags & MLX5_FLOW_LAYER_TUNNEL); + switch (items->type) { case RTE_FLOW_ITEM_TYPE_INTEGRITY: flow_dv_translate_item_integrity(items, integrity_items, - &last_item); + &wks.last_item); break; case RTE_FLOW_ITEM_TYPE_CONNTRACK: flow_dv_translate_item_aso_ct(dev, match_mask, @@ -13385,13 +13337,22 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_item_flex(dev, match_mask, match_value, items, dev_flow, tunnel != 0); - last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : - MLX5_FLOW_ITEM_OUTER_FLEX; + wks.last_item = tunnel ? MLX5_FLOW_ITEM_INNER_FLEX : + MLX5_FLOW_ITEM_OUTER_FLEX; break; + default: + ret = flow_dv_translate_items(dev, items, &wks_m, + match_mask, MLX5_SET_MATCHER_SW_M, error); + if (ret) + return ret; + ret = flow_dv_translate_items(dev, items, &wks, + match_value, MLX5_SET_MATCHER_SW_V, error); + if (ret) + return ret; break; } - item_flags |= last_item; + wks.item_flags |= wks.last_item; } /* * When E-Switch mode is enabled, we have two cases where we need to @@ -13401,48 +13362,82 @@ flow_dv_translate_items(struct rte_eth_dev *dev, * In both cases the source port is set according the current port * in use. */ - if (!(item_flags & MLX5_FLOW_ITEM_PORT_ID) && - !(item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && + if (!(wks.item_flags & MLX5_FLOW_ITEM_PORT_ID) && + !(wks.item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) && priv->sh->esw_mode && !(attr->egress && !attr->transfer)) { - if (flow_dv_translate_item_port_id(dev, match_mask, + if (flow_dv_translate_item_port_id_all(dev, match_mask, match_value, NULL, attr)) return -rte_errno; } - if (item_flags & MLX5_FLOW_ITEM_INTEGRITY) { + if (wks.item_flags & MLX5_FLOW_ITEM_INTEGRITY) { flow_dv_translate_item_integrity_post(match_mask, match_value, integrity_items, - item_flags); - } - if (item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) - flow_dv_translate_item_vxlan_gpe(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GENEVE) - flow_dv_translate_item_geneve(match_mask, match_value, - tunnel_item, item_flags); - else if (item_flags & MLX5_FLOW_LAYER_GRE) { - if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) - flow_dv_translate_item_gre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) - flow_dv_translate_item_nvgre(match_mask, match_value, - tunnel_item, item_flags); - else if (tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) - flow_dv_translate_item_gre_option(match_mask, match_value, - tunnel_item, gre_item, item_flags); - else + wks.item_flags); + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_vxlan_gpe(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_geneve(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.item_flags & 
MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(match_mask, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_nvgre(match_value, + wks.tunnel_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(match_mask, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_gre_option(match_value, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + MLX5_SET_MATCHER_SW_V); + } else { MLX5_ASSERT(false); + } } - matcher->priority = priority; + dev_flow->handle->vf_vlan.tag = wks.vlan_tag; + matcher->priority = wks.priority; #ifdef RTE_LIBRTE_MLX5_DEBUG - MLX5_ASSERT(!flow_dv_check_valid_spec(matcher->mask.buf, - dev_flow->dv.value.buf)); + MLX5_ASSERT(!flow_dv_check_valid_spec(match_mask, match_value)); #endif /* * Layers may be already initialized from prefix flow if this dev_flow * is the suffix flow. */ - handle->layers |= item_flags; - return ret; + dev_flow->handle->layers |= wks.item_flags; + dev_flow->flow->geneve_tlv_option = wks.geneve_tlv_option; + return 0; } /** @@ -14182,7 +14177,7 @@ flow_dv_translate(struct rte_eth_dev *dev, modify_action_position = actions_n++; } dev_flow->act_flags = action_flags; - ret = flow_dv_translate_items(dev, dev_flow, attr, items, &matcher, + ret = flow_dv_translate_items_sws(dev, dev_flow, attr, items, &matcher, error); if (ret) return -rte_errno; @@ -16754,27 +16749,23 @@ __flow_dv_create_policy_flow(struct rte_eth_dev *dev, struct mlx5_flow_dv_match_params value = { .size = sizeof(value.buf), }; - struct mlx5_flow_dv_match_params matcher = { - .size = sizeof(matcher.buf), - }; struct mlx5_priv *priv = dev->data->dev_private; uint8_t misc_mask; if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) - ret = flow_dv_translate_item_represented_port(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_represented_port(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); else - ret = flow_dv_translate_item_port_id(dev, matcher.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, value.buf, + item, attr, MLX5_SET_MATCHER_SW_V); if (ret) { DRV_LOG(ERR, "Failed to create meter policy%d flow's" " value with port.", color); return -1; } } - flow_dv_match_meta_reg(matcher.buf, value.buf, - (enum modify_reg)color_reg_c_idx, + flow_dv_match_meta_reg(value.buf, (enum modify_reg)color_reg_c_idx, rte_col_2_mlx5_col(color), UINT32_MAX); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -16806,9 +16797,6 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, }, .tbl = tbl_rsc, }; - struct mlx5_flow_dv_match_params value = { - .size = sizeof(value.buf), - }; struct mlx5_flow_cb_ctx ctx = { .error = error, .data = &matcher, @@ -16821,10 +16809,10 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, if (match_src_port && priv->sh->esw_mode) { if (item && item->type == RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT) ret = flow_dv_translate_item_represented_port(dev, matcher.mask.buf, - value.buf, item, attr); + 
item, attr, MLX5_SET_MATCHER_SW_M); else - ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, value.buf, - item, attr); + ret = flow_dv_translate_item_port_id(dev, matcher.mask.buf, + item, attr, MLX5_SET_MATCHER_SW_M); if (ret) { DRV_LOG(ERR, "Failed to register meter policy%d matcher" " with port.", priority); @@ -16833,7 +16821,7 @@ __flow_dv_create_policy_matcher(struct rte_eth_dev *dev, } tbl_data = container_of(tbl_rsc, struct mlx5_flow_tbl_data_entry, tbl); if (priority < RTE_COLOR_RED) - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg(matcher.mask.buf, (enum modify_reg)color_reg_c_idx, 0, color_mask); matcher.priority = priority; matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf, @@ -17369,7 +17357,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, tbl_data = container_of(mtrmng->drop_tbl[domain], struct mlx5_flow_tbl_data_entry, tbl); if (!mtrmng->def_matcher[domain]) { - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); matcher.priority = MLX5_MTRS_DEFAULT_RULE_PRIORITY; @@ -17389,7 +17377,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, if (!mtrmng->def_rule[domain]) { i = 0; actions[i++] = priv->sh->dr_drop_action; - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, 0); misc_mask = flow_dv_matcher_enable(value.buf); __flow_dv_adjust_buf_size(&value.size, misc_mask); @@ -17408,7 +17396,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, MLX5_ASSERT(mtrmng->max_mtr_bits); if (!mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]) { /* Create matchers for Drop. */ - flow_dv_match_meta_reg(matcher.mask.buf, value.buf, + flow_dv_match_meta_reg_all(matcher.mask.buf, value.buf, (enum modify_reg)mtr_id_reg_c, 0, (mtr_id_mask << mtr_id_offset)); matcher.priority = MLX5_REG_BITS - mtrmng->max_mtr_bits; @@ -17428,7 +17416,7 @@ flow_dv_create_mtr_tbls(struct rte_eth_dev *dev, drop_matcher = mtrmng->drop_matcher[domain][mtrmng->max_mtr_bits - 1]; /* Create drop rule, matching meter_id only. */ - flow_dv_match_meta_reg(matcher_para.buf, value.buf, + flow_dv_match_meta_reg_all(matcher_para.buf, value.buf, (enum modify_reg)mtr_id_reg_c, (mtr_idx << mtr_id_offset), UINT32_MAX); i = 0; @@ -18910,8 +18898,12 @@ flow_dv_discover_priorities(struct rte_eth_dev *dev, flow.dv.actions[0] = action; flow.dv.actions_n = 1; memset(ð, 0, sizeof(eth)); - flow_dv_translate_item_eth(matcher.mask.buf, flow.dv.value.buf, - &item, /* inner */ false, /* group */ 0); + flow_dv_translate_item_eth(matcher.mask.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_M); + flow_dv_translate_item_eth(flow.dv.value.buf, &item, + /* inner */ false, /* group */ 0, + MLX5_SET_MATCHER_SW_V); matcher.crc = rte_raw_cksum(matcher.mask.buf, matcher.mask.size); for (i = 0; i < vprio_n; i++) { /* Configure the next proposed maximum priority. */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
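[Editorial note on the patch above] The core idea of this split is that one translation routine now serves two passes selected by a key_type flag: MLX5_SET_MATCHER_SW_M fills the matcher mask buffer and MLX5_SET_MATCHER_SW_V fills the masked value buffer, which is what lets HW steering later reuse the same code one pass (one stage) at a time. The stand-alone sketch below only mirrors that calling pattern; struct item, fill_item() and the SET_MATCHER_M/V macros are simplified stand-ins invented for illustration, not the PMD's real types or API.

/* Minimal self-contained sketch of the mask/value split pattern. */
#include <stdint.h>
#include <stdio.h>

#define SET_MATCHER_M (1u << 0)	/* fill the matcher mask buffer */
#define SET_MATCHER_V (1u << 1)	/* fill the matcher value buffer */

struct item { uint8_t spec; uint8_t mask; };

/* One routine serves both passes: key_type decides whether the mask or
 * the masked value is written into the key buffer. */
static void
fill_item(uint8_t *key, const struct item *it, uint32_t key_type)
{
	if (key_type & SET_MATCHER_M)
		*key = it->mask;
	else
		*key = it->spec & it->mask;
}

int
main(void)
{
	struct item ip_proto = { .spec = 6 /* TCP */, .mask = 0xff };
	uint8_t match_mask = 0, match_value = 0;

	/* SW steering still needs both buffers, so it runs two passes... */
	fill_item(&match_mask, &ip_proto, SET_MATCHER_M);
	fill_item(&match_value, &ip_proto, SET_MATCHER_V);
	/* ...while HW steering can run a single pass per stage. */
	printf("mask=0x%02x value=0x%02x\n", match_mask, match_value);
	return 0;
}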
* RE: [v6 02/18] net/mlx5: split flow item matcher and value translation 2022-10-20 15:57 ` [v6 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-24 6:49 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:49 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 02/18] net/mlx5: split flow item matcher and value translation > > From: Suanming Mou <suanmingm@nvidia.com> > > As hardware steering mode translates flow matcher and value in two > different stages, split the flow item matcher and value translation > to help reuse the code. > > Signed-off-by: Suanming Mou <suanmingm@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 03/18] net/mlx5: add hardware steering item translation function 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-20 15:57 ` [v6 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-20 15:57 ` [v6 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 04/18] net/mlx5: add port to metadata conversion Alex Vesker ` (15 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika From: Suanming Mou <suanmingm@nvidia.com> As hardware steering root table flows still work under FW steering mode. This commit provides shared item tranlsation code for hardware steering root table flows. Signed-off-by: Suanming Mou <suanmingm@nvidia.com> --- drivers/net/mlx5/mlx5_flow.c | 10 +-- drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++- drivers/net/mlx5/mlx5_flow_dv.c | 134 ++++++++++++++++++++++++-------- 3 files changed, 155 insertions(+), 41 deletions(-) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 06b465de7a..026d77b01f 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -7108,7 +7108,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) struct rte_flow_item_port_id port_spec = { .id = MLX5_PORT_ESW_MGR, }; - struct mlx5_rte_flow_item_tx_queue txq_spec = { + struct mlx5_rte_flow_item_sq txq_spec = { .queue = txq, }; struct rte_flow_item pattern[] = { @@ -7118,7 +7118,7 @@ mlx5_flow_create_devx_sq_miss_flow(struct rte_eth_dev *dev, uint32_t txq) }, { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &txq_spec, }, { @@ -7504,16 +7504,16 @@ mlx5_ctrl_flow_source_queue(struct rte_eth_dev *dev, .egress = 1, .priority = 0, }; - struct mlx5_rte_flow_item_tx_queue queue_spec = { + struct mlx5_rte_flow_item_sq queue_spec = { .queue = queue, }; - struct mlx5_rte_flow_item_tx_queue queue_mask = { + struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; struct rte_flow_item items[] = { { .type = (enum rte_flow_item_type) - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, .spec = &queue_spec, .last = NULL, .mask = &queue_mask, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 7e5ade52cb..3537eb3d66 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -28,7 +28,7 @@ enum mlx5_rte_flow_item_type { MLX5_RTE_FLOW_ITEM_TYPE_END = INT_MIN, MLX5_RTE_FLOW_ITEM_TYPE_TAG, - MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE, + MLX5_RTE_FLOW_ITEM_TYPE_SQ, MLX5_RTE_FLOW_ITEM_TYPE_VLAN, MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL, }; @@ -95,7 +95,7 @@ struct mlx5_flow_action_copy_mreg { }; /* Matches on source queue. */ -struct mlx5_rte_flow_item_tx_queue { +struct mlx5_rte_flow_item_sq { uint32_t queue; }; @@ -159,7 +159,7 @@ enum mlx5_feature_name { #define MLX5_FLOW_LAYER_GENEVE (1u << 26) /* Queue items. */ -#define MLX5_FLOW_ITEM_TX_QUEUE (1u << 27) +#define MLX5_FLOW_ITEM_SQ (1u << 27) /* Pattern tunnel Layer bits (continued). */ #define MLX5_FLOW_LAYER_GTP (1u << 28) @@ -196,6 +196,9 @@ enum mlx5_feature_name { #define MLX5_FLOW_ITEM_PORT_REPRESENTOR (UINT64_C(1) << 41) #define MLX5_FLOW_ITEM_REPRESENTED_PORT (UINT64_C(1) << 42) +/* Meter color item */ +#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44) + /* Outer Masks. 
*/ #define MLX5_FLOW_LAYER_OUTER_L3 \ (MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6) @@ -1009,6 +1012,18 @@ flow_items_to_tunnel(const struct rte_flow_item items[]) return items[0].spec; } +/* HW steering flow attributes. */ +struct mlx5_flow_attr { + uint32_t port_id; /* Port index. */ + uint32_t group; /* Flow group. */ + uint32_t priority; /* Original Priority. */ + /* rss level, used by priority adjustment. */ + uint32_t rss_level; + /* Action flags, used by priority adjustment. */ + uint32_t act_flags; + uint32_t tbl_type; /* Flow table type. */ +}; + /* Flow structure. */ struct rte_flow { uint32_t dev_handles; @@ -1769,6 +1784,32 @@ mlx5_translate_tunnel_etypes(uint64_t pattern_flags) int flow_hw_q_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error); + +/* + * Convert rte_mtr_color to mlx5 color. + * + * @param[in] rcol + * rte_mtr_color. + * + * @return + * mlx5 color. + */ +static inline int +rte_col_2_mlx5_col(enum rte_color rcol) +{ + switch (rcol) { + case RTE_COLOR_GREEN: + return MLX5_FLOW_COLOR_GREEN; + case RTE_COLOR_YELLOW: + return MLX5_FLOW_COLOR_YELLOW; + case RTE_COLOR_RED: + return MLX5_FLOW_COLOR_RED; + default: + break; + } + return MLX5_FLOW_COLOR_UNDEFINED; +} + int mlx5_flow_group_to_table(struct rte_eth_dev *dev, const struct mlx5_flow_tunnel *tunnel, uint32_t group, uint32_t *table, @@ -2128,4 +2169,9 @@ int mlx5_flow_get_item_vport_id(struct rte_eth_dev *dev, bool *all_ports, struct rte_flow_error *error); +int flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error); #endif /* RTE_PMD_MLX5_FLOW_H_ */ diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 944db9c3e4..fb542ffde9 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -212,31 +212,6 @@ flow_dv_attr_init(const struct rte_flow_item *item, union flow_dv_attr *attr, attr->valid = 1; } -/* - * Convert rte_mtr_color to mlx5 color. - * - * @param[in] rcol - * rte_mtr_color. - * - * @return - * mlx5 color. - */ -static inline int -rte_col_2_mlx5_col(enum rte_color rcol) -{ - switch (rcol) { - case RTE_COLOR_GREEN: - return MLX5_FLOW_COLOR_GREEN; - case RTE_COLOR_YELLOW: - return MLX5_FLOW_COLOR_YELLOW; - case RTE_COLOR_RED: - return MLX5_FLOW_COLOR_RED; - default: - break; - } - return MLX5_FLOW_COLOR_UNDEFINED; -} - struct field_modify_info { uint32_t size; /* Size of field in protocol header, in bytes. */ uint32_t offset; /* Offset of field in protocol header, in bytes. */ @@ -7338,8 +7313,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, return ret; last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + last_item = MLX5_FLOW_ITEM_SQ; break; case MLX5_RTE_FLOW_ITEM_TYPE_TAG: break; @@ -8225,7 +8200,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, * work due to metadata regC0 mismatch. 
*/ if ((!attr->transfer && attr->egress) && priv->representor && - !(item_flags & MLX5_FLOW_ITEM_TX_QUEUE)) + !(item_flags & MLX5_FLOW_ITEM_SQ)) return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL, @@ -11244,9 +11219,9 @@ flow_dv_translate_item_tx_queue(struct rte_eth_dev *dev, const struct rte_flow_item *item, uint32_t key_type) { - const struct mlx5_rte_flow_item_tx_queue *queue_m; - const struct mlx5_rte_flow_item_tx_queue *queue_v; - const struct mlx5_rte_flow_item_tx_queue queue_mask = { + const struct mlx5_rte_flow_item_sq *queue_m; + const struct mlx5_rte_flow_item_sq *queue_v; + const struct mlx5_rte_flow_item_sq queue_mask = { .queue = UINT32_MAX, }; void *misc_v = @@ -13231,9 +13206,9 @@ flow_dv_translate_items(struct rte_eth_dev *dev, flow_dv_translate_mlx5_item_tag(dev, key, items, key_type); last_item = MLX5_FLOW_ITEM_TAG; break; - case MLX5_RTE_FLOW_ITEM_TYPE_TX_QUEUE: + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: flow_dv_translate_item_tx_queue(dev, key, items, key_type); - last_item = MLX5_FLOW_ITEM_TX_QUEUE; + last_item = MLX5_FLOW_ITEM_SQ; break; case RTE_FLOW_ITEM_TYPE_GTP: flow_dv_translate_item_gtp(key, items, tunnel, key_type); @@ -13273,6 +13248,99 @@ flow_dv_translate_items(struct rte_eth_dev *dev, return 0; } +/** + * Fill the HW steering flow with DV spec. + * + * @param[in] items + * Pointer to the list of items. + * @param[in] attr + * Pointer to the flow attributes. + * @param[in] key + * Pointer to the flow matcher key. + * @param[in] key_type + * Key type. + * @param[in, out] item_flags + * Pointer to the flow item flags. + * @param[out] error + * Pointer to the error structure. + * + * @return + * 0 on success, a negative errno value otherwise and rte_errno is set. + */ +int +flow_dv_translate_items_hws(const struct rte_flow_item *items, + struct mlx5_flow_attr *attr, void *key, + uint32_t key_type, uint64_t *item_flags, + uint8_t *match_criteria, + struct rte_flow_error *error) +{ + struct mlx5_flow_rss_desc rss_desc = { .level = attr->rss_level }; + struct rte_flow_attr rattr = { + .group = attr->group, + .priority = attr->priority, + .ingress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_RX), + .egress = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_NIC_TX), + .transfer = !!(attr->tbl_type == MLX5DR_TABLE_TYPE_FDB), + }; + struct mlx5_dv_matcher_workspace wks = { + .action_flags = attr->act_flags, + .item_flags = item_flags ? 
*item_flags : 0, + .external = 0, + .next_protocol = 0xff, + .attr = &rattr, + .rss_desc = &rss_desc, + }; + int ret; + + for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) { + if (!mlx5_flow_os_item_supported(items->type)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "item not supported"); + ret = flow_dv_translate_items(&rte_eth_devices[attr->port_id], + items, &wks, key, key_type, NULL); + if (ret) + return ret; + } + if (wks.item_flags & MLX5_FLOW_LAYER_VXLAN_GPE) { + flow_dv_translate_item_vxlan_gpe(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GENEVE) { + flow_dv_translate_item_geneve(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.item_flags & MLX5_FLOW_LAYER_GRE) { + if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE) { + flow_dv_translate_item_gre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_GRE_OPTION) { + flow_dv_translate_item_gre_option(key, + wks.tunnel_item, + wks.gre_item, + wks.item_flags, + key_type); + } else if (wks.tunnel_item->type == RTE_FLOW_ITEM_TYPE_NVGRE) { + flow_dv_translate_item_nvgre(key, + wks.tunnel_item, + wks.item_flags, + key_type); + } else { + MLX5_ASSERT(false); + } + } + + if (match_criteria) + *match_criteria = flow_dv_matcher_enable(key); + if (item_flags) + *item_flags = wks.item_flags; + return 0; +} + /** * Fill the SW steering flow with DV spec. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
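[Editorial note on the patch above] The exported helper flow_dv_translate_items_hws() is what allows HWS root-table (group 0) flows, which still go through FW/DV steering, to reuse the DV item translation. The sketch below is a hedged usage illustration only: it assumes the driver-internal mlx5_flow.h header and the prototype shown in this patch; the wrapper name, the attribute values and the single value-side pass are invented for illustration and are not code from the series.

/* Illustrative wrapper, not part of the series: translate a root-table
 * pattern into a DV match value buffer via the new shared helper. */
#include "mlx5_flow.h"

static int
example_translate_root_pattern(uint16_t port_id,
			       const struct rte_flow_item items[],
			       void *match_buf, uint8_t *match_criteria,
			       struct rte_flow_error *error)
{
	struct mlx5_flow_attr attr = {
		.port_id = port_id,
		.group = 0,				/* root table */
		.tbl_type = MLX5DR_TABLE_TYPE_NIC_RX,	/* ingress */
	};
	uint64_t item_flags = 0;

	/* MLX5_SET_MATCHER_HS_V fills the value half of the rule; the
	 * matcher mask side would come from a second call using
	 * MLX5_SET_MATCHER_HS_M. */
	return flow_dv_translate_items_hws(items, &attr, match_buf,
					   MLX5_SET_MATCHER_HS_V,
					   &item_flags, match_criteria,
					   error);
}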
* RE: [v6 03/18] net/mlx5: add hardware steering item translation function 2022-10-20 15:57 ` [v6 03/18] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-24 6:50 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:50 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 03/18] net/mlx5: add hardware steering item translation > function > > From: Suanming Mou <suanmingm@nvidia.com> > > As hardware steering root table flows still work under FW steering mode. > This commit provides shared item tranlsation code for hardware steering root > table flows. > > Signed-off-by: Suanming Mou <suanmingm@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 04/18] net/mlx5: add port to metadata conversion 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (2 preceding siblings ...) 2022-10-20 15:57 ` [v6 03/18] net/mlx5: add hardware steering item translation function Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 05/18] common/mlx5: query set capability of registers Alex Vesker ` (14 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Dariusz Sosnowski From: Dariusz Sosnowski <dsosnowski@nvidia.com> This patch initial version of functions used to: - convert between ethdev port_id and internal tag/mask value, - convert between IB context and internal tag/mask value. Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 10 +++++- drivers/net/mlx5/mlx5.c | 1 + drivers/net/mlx5/mlx5_flow.c | 6 ++++ drivers/net/mlx5/mlx5_flow.h | 52 ++++++++++++++++++++++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 29 ++++++++++++++++++ 5 files changed, 97 insertions(+), 1 deletion(-) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index 3e505d8f4c..d1e7bcce57 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1554,8 +1554,16 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, if (!priv->hrxqs) goto error; rte_rwlock_init(&priv->ind_tbls_lock); - if (priv->sh->config.dv_flow_en == 2) + if (priv->sh->config.dv_flow_en == 2) { +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + if (priv->vport_meta_mask) + flow_hw_set_port_info(eth_dev); return eth_dev; +#else + DRV_LOG(ERR, "DV support is missing for HWS."); + goto error; +#endif + } /* Port representor shares the same max priority with pf port. */ if (!priv->sh->flow_priority_check_flag) { /* Supported Verbs flow priority number detection. */ diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 752b60d769..1d10932619 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1944,6 +1944,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) mlx5_flex_item_port_cleanup(dev); #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); + flow_hw_clear_port_info(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 026d77b01f..72f4374c07 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -33,6 +33,12 @@ #include "mlx5_common_os.h" #include "rte_pmd_mlx5.h" +/* + * Shared array for quick translation between port_id and vport mask/values + * used for HWS rules. + */ +struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 3537eb3d66..c0c719dd8b 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1326,6 +1326,58 @@ struct mlx5_flow_split_info { uint64_t prefix_layers; /**< Prefix subflow layers. */ }; +struct flow_hw_port_info { + uint32_t regc_mask; + uint32_t regc_value; + uint32_t is_wire:1; +}; + +extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; + +/* + * Get metadata match tag and mask for given rte_eth_dev port. + * Used in HWS rule creation. 
+ */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_conv_port_id(const uint16_t port_id) +{ + struct flow_hw_port_info *port_info; + + if (port_id >= RTE_MAX_ETHPORTS) + return NULL; + port_info = &mlx5_flow_hw_port_infos[port_id]; + return !!port_info->regc_mask ? port_info : NULL; +} + +#ifdef HAVE_IBV_FLOW_DV_SUPPORT +/* + * Get metadata match tag and mask for the uplink port represented + * by given IB context. Used in HWS context creation. + */ +static __rte_always_inline const struct flow_hw_port_info * +flow_hw_get_wire_port(struct ibv_context *ibctx) +{ + struct ibv_device *ibdev = ibctx->device; + uint16_t port_id; + + MLX5_ETH_FOREACH_DEV(port_id, NULL) { + const struct mlx5_priv *priv = + rte_eth_devices[port_id].data->dev_private; + + if (priv && priv->master) { + struct ibv_context *port_ibctx = priv->sh->cdev->ctx; + + if (port_ibctx->device == ibdev) + return flow_hw_conv_port_id(port_id); + } + } + return NULL; +} +#endif + +void flow_hw_set_port_info(struct rte_eth_dev *dev); +void flow_hw_clear_port_info(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index b168ff9e7e..765e5164cb 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2211,6 +2211,35 @@ flow_hw_resource_release(struct rte_eth_dev *dev) priv->nb_queue = 0; } +/* Sets vport tag and mask, for given port, used in HWS rules. */ +void +flow_hw_set_port_info(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = priv->vport_meta_mask; + info->regc_value = priv->vport_meta_tag; + info->is_wire = priv->master; +} + +/* Clears vport tag and mask used for HWS rules. */ +void +flow_hw_clear_port_info(struct rte_eth_dev *dev) +{ + uint16_t port_id = dev->data->port_id; + struct flow_hw_port_info *info; + + MLX5_ASSERT(port_id < RTE_MAX_ETHPORTS); + info = &mlx5_flow_hw_port_infos[port_id]; + info->regc_mask = 0; + info->regc_value = 0; + info->is_wire = 0; +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
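[Editorial note on the patch above] The conversion added here is lookup-only on the rule-creation path: flow_hw_set_port_info() records a port's REG_C tag/mask once at spawn time, and flow_hw_conv_port_id() later returns that record, or NULL when the port has no usable vport metadata. A minimal hedged sketch of a caller follows; it assumes only the driver-internal mlx5_flow.h header, and example_port_to_metadata() is an invented name, not a function from this series.

/* Illustrative only: fetch the REG_C value/mask that identifies an ethdev
 * port, as HWS rule creation is expected to do with these helpers. */
#include <errno.h>

#include "mlx5_flow.h"

static int
example_port_to_metadata(uint16_t port_id, uint32_t *tag, uint32_t *mask)
{
	const struct flow_hw_port_info *info = flow_hw_conv_port_id(port_id);

	/* NULL means flow_hw_set_port_info() never ran for this port or the
	 * port has no usable vport metadata (regc_mask == 0). */
	if (info == NULL)
		return -ENODEV;
	*tag = info->regc_value;
	*mask = info->regc_mask;
	return 0;
}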
* RE: [v6 04/18] net/mlx5: add port to metadata conversion 2022-10-20 15:57 ` [v6 04/18] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-24 6:50 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:50 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Dariusz Sosnowski > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Dariusz Sosnowski > <dsosnowski@nvidia.com> > Subject: [v6 04/18] net/mlx5: add port to metadata conversion > > From: Dariusz Sosnowski <dsosnowski@nvidia.com> > > This patch initial version of functions used to: > > - convert between ethdev port_id and internal tag/mask value, > - convert between IB context and internal tag/mask value. > > Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 05/18] common/mlx5: query set capability of registers 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (3 preceding siblings ...) 2022-10-20 15:57 ` [v6 04/18] net/mlx5: add port to metadata conversion Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 06/18] net/mlx5: provide the available tag registers Alex Vesker ` (13 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> In the flow table capabilities, new fields are added to query the capability to set, add, copy to a REG_C_x. The set capability are queried and saved for the future usage. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/common/mlx5/mlx5_devx_cmds.c | 30 +++++++++++++++++++ drivers/common/mlx5/mlx5_devx_cmds.h | 2 ++ drivers/common/mlx5/mlx5_prm.h | 45 +++++++++++++++++++++++++--- 3 files changed, 73 insertions(+), 4 deletions(-) diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c index 76f0b6724f..9c185366d0 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.c +++ b/drivers/common/mlx5/mlx5_devx_cmds.c @@ -1064,6 +1064,24 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->modify_outer_ip_ecn = MLX5_GET (flow_table_nic_cap, hcattr, ft_header_modify_nic_receive.outer_ip_ecn); + attr->set_reg_c = 0xff; + if (attr->nic_flow_table) { +#define GET_RX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_receive.metadata_reg_c_x) +#define GET_TX_REG_X_BITS \ + MLX5_GET(flow_table_nic_cap, hcattr, \ + ft_header_modify_nic_transmit.metadata_reg_c_x) + + uint32_t tx_reg, rx_reg; + + tx_reg = GET_TX_REG_X_BITS; + rx_reg = GET_RX_REG_X_BITS; + attr->set_reg_c &= (rx_reg & tx_reg); + +#undef GET_RX_REG_X_BITS +#undef GET_TX_REG_X_BITS + } attr->pkt_integrity_match = mlx5_devx_query_pkt_integrity_match(hcattr); attr->inner_ipv4_ihl = MLX5_GET (flow_table_nic_cap, hcattr, @@ -1163,6 +1181,18 @@ mlx5_devx_cmd_query_hca_attr(void *ctx, attr->esw_mgr_vport_id = MLX5_GET(esw_cap, hcattr, esw_manager_vport_number); } + if (attr->eswitch_manager) { + uint32_t esw_reg; + + hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + if (!hcattr) + return rc; + esw_reg = MLX5_GET(flow_table_esw_cap, hcattr, + ft_header_modify_esw_fdb.metadata_reg_c_x); + attr->set_reg_c &= esw_reg; + } return 0; error: rc = (rc > 0) ? -rc : rc; diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h index cceaf3411d..a10aa3331b 100644 --- a/drivers/common/mlx5/mlx5_devx_cmds.h +++ b/drivers/common/mlx5/mlx5_devx_cmds.h @@ -263,6 +263,8 @@ struct mlx5_hca_attr { uint32_t crypto_wrapped_import_method:1; uint16_t esw_mgr_vport_id; /* E-Switch Mgr vport ID . 
*/ uint16_t max_wqe_sz_sq; + uint32_t set_reg_c:8; + uint32_t nic_flow_table:1; uint32_t modify_outer_ip_ecn:1; }; diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index 9c1c93f916..ca4763f53d 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -1295,6 +1295,7 @@ enum { MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP = 0xc << 1, MLX5_GET_HCA_CAP_OP_MOD_ROCE = 0x4 << 1, MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE = 0x7 << 1, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE = 0x8 << 1, MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, @@ -1892,6 +1893,7 @@ struct mlx5_ifc_roce_caps_bits { }; struct mlx5_ifc_ft_fields_support_bits { + /* set_action_field_support */ u8 outer_dmac[0x1]; u8 outer_smac[0x1]; u8 outer_ether_type[0x1]; @@ -1919,7 +1921,7 @@ struct mlx5_ifc_ft_fields_support_bits { u8 outer_gre_key[0x1]; u8 outer_vxlan_vni[0x1]; u8 reserved_at_1a[0x5]; - u8 source_eswitch_port[0x1]; + u8 source_eswitch_port[0x1]; /* end of DW0 */ u8 inner_dmac[0x1]; u8 inner_smac[0x1]; u8 inner_ether_type[0x1]; @@ -1943,8 +1945,33 @@ struct mlx5_ifc_ft_fields_support_bits { u8 inner_tcp_sport[0x1]; u8 inner_tcp_dport[0x1]; u8 inner_tcp_flags[0x1]; - u8 reserved_at_37[0x9]; - u8 reserved_at_40[0x40]; + u8 reserved_at_37[0x9]; /* end of DW1 */ + u8 reserved_at_40[0x20]; /* end of DW2 */ + u8 reserved_at_60[0x18]; + union { + struct { + u8 metadata_reg_c_7[0x1]; + u8 metadata_reg_c_6[0x1]; + u8 metadata_reg_c_5[0x1]; + u8 metadata_reg_c_4[0x1]; + u8 metadata_reg_c_3[0x1]; + u8 metadata_reg_c_2[0x1]; + u8 metadata_reg_c_1[0x1]; + u8 metadata_reg_c_0[0x1]; + }; + u8 metadata_reg_c_x[0x8]; + }; /* end of DW3 */ + /* set_action_field_support_2 */ + u8 reserved_at_80[0x80]; + /* add_action_field_support */ + u8 reserved_at_100[0x80]; + /* add_action_field_support_2 */ + u8 reserved_at_180[0x80]; + /* copy_action_field_support */ + u8 reserved_at_200[0x80]; + /* copy_action_field_support_2 */ + u8 reserved_at_280[0x80]; + u8 reserved_at_300[0x100]; }; /* @@ -1989,9 +2016,18 @@ struct mlx5_ifc_flow_table_nic_cap_bits { u8 reserved_at_e00[0x200]; struct mlx5_ifc_ft_fields_support_bits ft_header_modify_nic_receive; - u8 reserved_at_1080[0x380]; struct mlx5_ifc_ft_fields_support_2_bits ft_field_support_2_nic_receive; + u8 reserved_at_1480[0x780]; + struct mlx5_ifc_ft_fields_support_bits + ft_header_modify_nic_transmit; + u8 reserved_at_2000[0x6000]; +}; + +struct mlx5_ifc_flow_table_esw_cap_bits { + u8 reserved_at_0[0x800]; + struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb; + u8 reserved_at_C00[0x7400]; }; /* @@ -2046,6 +2082,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_qos_cap_bits qos_cap; struct mlx5_ifc_virtio_emulation_cap_bits vdpa_caps; struct mlx5_ifc_flow_table_nic_cap_bits flow_table_nic_cap; + struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; u8 reserved_at_0[0x8000]; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
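The query above reduces the per-domain metadata_reg_c_x capability bitmaps to a single set_reg_c byte: it starts from all-ones, intersects the NIC RX and TX bitmaps, and, on an E-Switch manager, further intersects the FDB bitmap. A small self-contained sketch of that intersection follows, using made-up capability values.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Made-up per-domain capability bitmaps, bit i = REG_C_i settable. */
	uint8_t rx_reg  = 0xfc;   /* NIC RX: REG_C_2..REG_C_7 */
	uint8_t tx_reg  = 0xfe;   /* NIC TX: REG_C_1..REG_C_7 */
	uint8_t esw_reg = 0xf4;   /* FDB: REG_C_2, REG_C_4..REG_C_7 */
	uint8_t set_reg_c = 0xff; /* start from "all settable" */
	int i;

	set_reg_c &= rx_reg & tx_reg; /* NIC flow table capability */
	set_reg_c &= esw_reg;         /* applied only on an E-Switch manager */

	for (i = 0; i < 8; i++)
		if (set_reg_c & (1u << i))
			printf("REG_C_%d can be set in all domains\n", i);
	return 0;
}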
* RE: [v6 05/18] common/mlx5: query set capability of registers 2022-10-20 15:57 ` [v6 05/18] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-24 6:50 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:50 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Bing Zhao > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Bing Zhao <bingz@nvidia.com> > Subject: [v6 05/18] common/mlx5: query set capability of registers > > From: Bing Zhao <bingz@nvidia.com> > > In the flow table capabilities, new fields are added to query the capability > to set, add, copy to a REG_C_x. > > The set capability are queried and saved for the future usage. > > Signed-off-by: Bing Zhao <bingz@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 06/18] net/mlx5: provide the available tag registers 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (4 preceding siblings ...) 2022-10-20 15:57 ` [v6 05/18] common/mlx5: query set capability of registers Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:51 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker ` (12 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Bing Zhao From: Bing Zhao <bingz@nvidia.com> The available tags that can be used by the application are fixed after startup. A global array is used to store the information and transfer the TAG item directly from the ID to the REG_C_x. Signed-off-by: Bing Zhao <bingz@nvidia.com> --- drivers/net/mlx5/linux/mlx5_os.c | 2 + drivers/net/mlx5/mlx5.c | 2 + drivers/net/mlx5/mlx5.h | 1 + drivers/net/mlx5/mlx5_defs.h | 2 + drivers/net/mlx5/mlx5_flow.c | 11 +++++ drivers/net/mlx5/mlx5_flow.h | 27 ++++++++++++ drivers/net/mlx5/mlx5_flow_hw.c | 76 ++++++++++++++++++++++++++++++++ 7 files changed, 121 insertions(+) diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c index d1e7bcce57..12f503474a 100644 --- a/drivers/net/mlx5/linux/mlx5_os.c +++ b/drivers/net/mlx5/linux/mlx5_os.c @@ -1558,6 +1558,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev, #ifdef HAVE_IBV_FLOW_DV_SUPPORT if (priv->vport_meta_mask) flow_hw_set_port_info(eth_dev); + /* Only HWS requires this information. */ + flow_hw_init_tags_set(eth_dev); return eth_dev; #else DRV_LOG(ERR, "DV support is missing for HWS."); diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 1d10932619..b39ef1ecbe 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1945,6 +1945,8 @@ mlx5_dev_close(struct rte_eth_dev *dev) #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); + if (priv->sh->config.dv_flow_en == 2) + flow_hw_clear_tags_set(dev); #endif if (priv->rxq_privs != NULL) { /* XXX race condition if mlx5_rx_burst() is still running. */ diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index c2c3ed81fa..aa328c3bc9 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -1205,6 +1205,7 @@ struct mlx5_dev_ctx_shared { uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */ uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */ uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */ + uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */ uint32_t max_port; /* Maximal IB device port index. */ struct mlx5_bond_info bond; /* Bonding information. */ struct mlx5_common_device *cdev; /* Backend mlx5 device. */ diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h index 018d3f0f0c..585afb0a98 100644 --- a/drivers/net/mlx5/mlx5_defs.h +++ b/drivers/net/mlx5/mlx5_defs.h @@ -139,6 +139,8 @@ #define MLX5_XMETA_MODE_META32 2 /* Provide info on patrial hw miss. Implies MLX5_XMETA_MODE_META16 */ #define MLX5_XMETA_MODE_MISS_INFO 3 +/* Only valid in HWS, 32bits extended META without MARK support in FDB. */ +#define MLX5_XMETA_MODE_META32_HWS 4 /* Tx accurate scheduling on timestamps parameters. */ #define MLX5_TXPP_WAIT_INIT_TS 1000ul /* How long to wait timestamp. 
*/ diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 72f4374c07..1543d7f75e 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -39,6 +39,17 @@ */ struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +/* + * A global structure to save the available REG_C_x for tags usage. + * The Meter color REG (ASO) and the last available one will be reserved + * for PMD internal usage. + * Since there is no "port" concept in the driver, it is assumed that the + * available tags set will be the minimum intersection. + * 3 - in FDB mode / 5 - in legacy mode + */ +uint32_t mlx5_flow_hw_avl_tags_init_cnt; +enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + struct tunnel_default_miss_ctx { uint16_t *queue; __extension__ diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index c0c719dd8b..98ae7c6bda 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -1334,6 +1334,10 @@ struct flow_hw_port_info { extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS]; +#define MLX5_FLOW_HW_TAGS_MAX 8 +extern uint32_t mlx5_flow_hw_avl_tags_init_cnt; +extern enum modify_reg mlx5_flow_hw_avl_tags[]; + /* * Get metadata match tag and mask for given rte_eth_dev port. * Used in HWS rule creation. @@ -1375,9 +1379,32 @@ flow_hw_get_wire_port(struct ibv_context *ibctx) } #endif +/* + * Convert metadata or tag to the actual register. + * META: Can only be used to match in the FDB in this stage, fixed C_1. + * TAG: C_x expect meter color reg and the reserved ones. + * TODO: Per port / device, FDB or NIC for Meta matching. + */ +static __rte_always_inline int +flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id) +{ + switch (type) { + case RTE_FLOW_ITEM_TYPE_META: + return REG_C_1; + case RTE_FLOW_ITEM_TYPE_TAG: + MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX); + return mlx5_flow_hw_avl_tags[id]; + default: + return REG_NON; + } +} + void flow_hw_set_port_info(struct rte_eth_dev *dev); void flow_hw_clear_port_info(struct rte_eth_dev *dev); +void flow_hw_init_tags_set(struct rte_eth_dev *dev); +void flow_hw_clear_tags_set(struct rte_eth_dev *dev); + typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev, const struct rte_flow_attr *attr, const struct rte_flow_item items[], diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 765e5164cb..03725649c8 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -2240,6 +2240,82 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev) info->is_wire = 0; } +/* + * Initialize the information of available tag registers and an intersection + * of all the probed devices' REG_C_Xs. + * PS. No port concept in steering part, right now it cannot be per port level. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_init_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + uint32_t meta_mode = priv->sh->config.dv_xmeta_en; + uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c; + uint32_t i, j; + enum modify_reg copy[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON}; + uint8_t unset = 0; + uint8_t copy_masks = 0; + + /* + * The CAPA is global for common device but only used in net. + * It is shared per eswitch domain. 
+ */ + if (!!priv->sh->hws_tags) + return; + unset |= 1 << (priv->mtr_color_reg - REG_C_0); + unset |= 1 << (REG_C_6 - REG_C_0); + if (meta_mode == MLX5_XMETA_MODE_META32_HWS) { + unset |= 1 << (REG_C_1 - REG_C_0); + unset |= 1 << (REG_C_0 - REG_C_0); + } + masks &= ~unset; + if (mlx5_flow_hw_avl_tags_init_cnt) { + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (mlx5_flow_hw_avl_tags[i] != REG_NON && !!((1 << i) & masks)) { + copy[mlx5_flow_hw_avl_tags[i] - REG_C_0] = + mlx5_flow_hw_avl_tags[i]; + copy_masks |= (1 << i); + } + } + if (copy_masks != masks) { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) + if (!!((1 << i) & copy_masks)) + mlx5_flow_hw_avl_tags[j++] = copy[i]; + } + } else { + j = 0; + for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) { + if (!!((1 << i) & masks)) + mlx5_flow_hw_avl_tags[j++] = + (enum modify_reg)(i + (uint32_t)REG_C_0); + } + } + priv->sh->hws_tags = 1; + mlx5_flow_hw_avl_tags_init_cnt++; +} + +/* + * Reset the available tag registers information to NONE. + * + * @param[in] dev + * Pointer to the rte_eth_dev structure. + */ +void flow_hw_clear_tags_set(struct rte_eth_dev *dev) +{ + struct mlx5_priv *priv = dev->data->dev_private; + + if (!priv->sh->hws_tags) + return; + priv->sh->hws_tags = 0; + mlx5_flow_hw_avl_tags_init_cnt--; + if (!mlx5_flow_hw_avl_tags_init_cnt) + memset(mlx5_flow_hw_avl_tags, REG_NON, + sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX); +} + /** * Create shared action. * -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
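flow_hw_init_tags_set() above turns the set_reg_c capability byte into the compact mlx5_flow_hw_avl_tags[] array: registers kept for PMD use (the ASO meter color register, the last REG_C, and REG_C_0/REG_C_1 in MLX5_XMETA_MODE_META32_HWS mode) are masked out, the surviving REG_C_x are packed so a TAG index maps directly to a register, and later ports intersect with what was already recorded. The sketch below models only the first-device path, with illustrative reserved-register choices; init_tags(), TAGS_MAX and REG_NON here are placeholders rather than the driver's definitions.

#include <stdint.h>
#include <stdio.h>

#define TAGS_MAX 8
#define REG_NON  (-1)

static void init_tags(uint8_t set_reg_c, unsigned int mtr_color_idx,
		      int extended_meta, int avl_tags[TAGS_MAX])
{
	uint8_t unset = 0;
	unsigned int i, j = 0;

	/* Registers reserved for PMD internal use. */
	unset |= 1u << mtr_color_idx; /* ASO meter color register */
	unset |= 1u << 6;             /* last REG_C, reserved */
	if (extended_meta) {
		unset |= 1u << 0;     /* REG_C_0: vport metadata */
		unset |= 1u << 1;     /* REG_C_1: 32-bit META */
	}
	set_reg_c &= (uint8_t)~unset;

	for (i = 0; i < TAGS_MAX; i++)
		avl_tags[i] = REG_NON;
	/* Pack the surviving registers: TAG index -> REG_C_x number. */
	for (i = 0; i < TAGS_MAX; i++)
		if (set_reg_c & (1u << i))
			avl_tags[j++] = (int)i;
}

int main(void)
{
	int tags[TAGS_MAX];
	unsigned int i;

	/* All eight REG_C settable, meter color in REG_C_3, META32_HWS mode. */
	init_tags(0xff, 3, 1, tags);
	for (i = 0; i < TAGS_MAX && tags[i] != REG_NON; i++)
		printf("TAG index %u -> REG_C_%d\n", i, tags[i]);
	return 0;
}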
* RE: [v6 06/18] net/mlx5: provide the available tag registers 2022-10-20 15:57 ` [v6 06/18] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-24 6:51 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:51 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Bing Zhao > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Bing Zhao <bingz@nvidia.com> > Subject: [v6 06/18] net/mlx5: provide the available tag registers > > From: Bing Zhao <bingz@nvidia.com> > > The available tags that can be used by the application are fixed after > startup. > > A global array is used to store the information and transfer the TAG item > directly from the ID to the REG_C_x. > > Signed-off-by: Bing Zhao <bingz@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 07/18] net/mlx5: Add additional glue functions for HWS 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (5 preceding siblings ...) 2022-10-20 15:57 ` [v6 06/18] net/mlx5: provide the available tag registers Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker ` (11 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Add missing glue support for HWS mlx5dr layer. The new glue functions are needed for mlx5dv create matcher and action, which are used as the kernel root table as well as for capabilities query like device name and ports info. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/linux/mlx5_glue.c | 121 ++++++++++++++++++++++++-- drivers/common/mlx5/linux/mlx5_glue.h | 17 ++++ 2 files changed, 131 insertions(+), 7 deletions(-) diff --git a/drivers/common/mlx5/linux/mlx5_glue.c b/drivers/common/mlx5/linux/mlx5_glue.c index b954df0784..702eb36b62 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.c +++ b/drivers/common/mlx5/linux/mlx5_glue.c @@ -111,6 +111,12 @@ mlx5_glue_query_device_ex(struct ibv_context *context, return ibv_query_device_ex(context, input, attr); } +static const char * +mlx5_glue_get_device_name(struct ibv_device *device) +{ + return ibv_get_device_name(device); +} + static int mlx5_glue_query_rt_values_ex(struct ibv_context *context, struct ibv_values_ex *values) @@ -620,6 +626,20 @@ mlx5_glue_dv_create_qp(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_matcher(context, matcher_attr); +#else + (void)context; + (void)matcher_attr; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, @@ -633,7 +653,7 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, matcher_attr->match_mask); #else (void)tbl; - return mlx5dv_create_flow_matcher(context, matcher_attr); + return __mlx5_glue_dv_create_flow_matcher(context, matcher_attr); #endif #else (void)context; @@ -644,6 +664,26 @@ mlx5_glue_dv_create_flow_matcher(struct ibv_context *context, #endif } +static void * +__mlx5_glue_dv_create_flow(void *matcher, + void *match_value, + size_t num_actions, + void *actions) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow(matcher, + match_value, + num_actions, + (struct mlx5dv_flow_action_attr *)actions); +#else + (void)matcher; + (void)match_value; + (void)num_actions; + (void)actions; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow(void *matcher, void *match_value, @@ -663,8 +703,8 @@ mlx5_glue_dv_create_flow(void *matcher, for (i = 0; i < num_actions; i++) actions_attr[i] = *((struct mlx5dv_flow_action_attr *)(actions[i])); - return mlx5dv_create_flow(matcher, match_value, - num_actions, actions_attr); + return __mlx5_glue_dv_create_flow(matcher, match_value, + num_actions, actions_attr); #endif #else (void)matcher; @@ -735,6 +775,26 @@ mlx5_glue_dv_create_flow_action_dest_devx_tir(void *tir) #endif } +static void * +__mlx5_glue_dv_create_flow_action_modify_header + (struct ibv_context *ctx, + size_t actions_sz, + uint64_t actions[], + 
enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_modify_header + (ctx, actions_sz, actions, ft_type); +#else + (void)ctx; + (void)ft_type; + (void)actions_sz; + (void)actions; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_modify_header (struct ibv_context *ctx, @@ -758,7 +818,7 @@ mlx5_glue_dv_create_flow_action_modify_header if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_modify_header + action->action = __mlx5_glue_dv_create_flow_action_modify_header (ctx, actions_sz, actions, ft_type); return action; #endif @@ -774,6 +834,27 @@ mlx5_glue_dv_create_flow_action_modify_header #endif } +static void * +__mlx5_glue_dv_create_flow_action_packet_reformat + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_create_flow_action_packet_reformat + (ctx, data_sz, data, reformat_type, ft_type); +#else + (void)ctx; + (void)reformat_type; + (void)ft_type; + (void)data_sz; + (void)data; + errno = ENOTSUP; + return NULL; +#endif +} + static void * mlx5_glue_dv_create_flow_action_packet_reformat (struct ibv_context *ctx, @@ -798,7 +879,7 @@ mlx5_glue_dv_create_flow_action_packet_reformat if (!action) return NULL; action->type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; - action->action = mlx5dv_create_flow_action_packet_reformat + action->action = __mlx5_glue_dv_create_flow_action_packet_reformat (ctx, data_sz, data, reformat_type, ft_type); return action; #endif @@ -908,6 +989,18 @@ mlx5_glue_dv_destroy_flow(void *flow_id) #endif } +static int +__mlx5_glue_dv_destroy_flow_matcher(void *matcher) +{ +#ifdef HAVE_IBV_FLOW_DV_SUPPORT + return mlx5dv_destroy_flow_matcher(matcher); +#else + (void)matcher; + errno = ENOTSUP; + return errno; +#endif +} + static int mlx5_glue_dv_destroy_flow_matcher(void *matcher) { @@ -915,7 +1008,7 @@ mlx5_glue_dv_destroy_flow_matcher(void *matcher) #ifdef HAVE_MLX5DV_DR return mlx5dv_dr_matcher_destroy(matcher); #else - return mlx5dv_destroy_flow_matcher(matcher); + return __mlx5_glue_dv_destroy_flow_matcher(matcher); #endif #else (void)matcher; @@ -1164,12 +1257,18 @@ mlx5_glue_devx_port_query(struct ibv_context *ctx, info->vport_id = devx_port.vport; info->query_flags |= MLX5_PORT_QUERY_VPORT; } + if (devx_port.flags & MLX5DV_QUERY_PORT_ESW_OWNER_VHCA_ID) { + info->esw_owner_vhca_id = devx_port.esw_owner_vhca_id; + info->query_flags |= MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + } #else #ifdef HAVE_MLX5DV_DR_DEVX_PORT /* The legacy DevX port query API is implemented (prior v35). 
*/ struct mlx5dv_devx_port devx_port = { .comp_mask = MLX5DV_DEVX_PORT_VPORT | - MLX5DV_DEVX_PORT_MATCH_REG_C_0 + MLX5DV_DEVX_PORT_MATCH_REG_C_0 | + MLX5DV_DEVX_PORT_VPORT_VHCA_ID | + MLX5DV_DEVX_PORT_ESW_OWNER_VHCA_ID }; err = mlx5dv_query_devx_port(ctx, port_num, &devx_port); @@ -1464,6 +1563,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .close_device = mlx5_glue_close_device, .query_device = mlx5_glue_query_device, .query_device_ex = mlx5_glue_query_device_ex, + .get_device_name = mlx5_glue_get_device_name, .query_rt_values_ex = mlx5_glue_query_rt_values_ex, .query_port = mlx5_glue_query_port, .create_comp_channel = mlx5_glue_create_comp_channel, @@ -1522,7 +1622,9 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { .dv_init_obj = mlx5_glue_dv_init_obj, .dv_create_qp = mlx5_glue_dv_create_qp, .dv_create_flow_matcher = mlx5_glue_dv_create_flow_matcher, + .dv_create_flow_matcher_root = __mlx5_glue_dv_create_flow_matcher, .dv_create_flow = mlx5_glue_dv_create_flow, + .dv_create_flow_root = __mlx5_glue_dv_create_flow, .dv_create_flow_action_counter = mlx5_glue_dv_create_flow_action_counter, .dv_create_flow_action_dest_ibv_qp = @@ -1531,8 +1633,12 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dv_create_flow_action_dest_devx_tir, .dv_create_flow_action_modify_header = mlx5_glue_dv_create_flow_action_modify_header, + .dv_create_flow_action_modify_header_root = + __mlx5_glue_dv_create_flow_action_modify_header, .dv_create_flow_action_packet_reformat = mlx5_glue_dv_create_flow_action_packet_reformat, + .dv_create_flow_action_packet_reformat_root = + __mlx5_glue_dv_create_flow_action_packet_reformat, .dv_create_flow_action_tag = mlx5_glue_dv_create_flow_action_tag, .dv_create_flow_action_meter = mlx5_glue_dv_create_flow_action_meter, .dv_modify_flow_action_meter = mlx5_glue_dv_modify_flow_action_meter, @@ -1541,6 +1647,7 @@ const struct mlx5_glue *mlx5_glue = &(const struct mlx5_glue) { mlx5_glue_dr_create_flow_action_default_miss, .dv_destroy_flow = mlx5_glue_dv_destroy_flow, .dv_destroy_flow_matcher = mlx5_glue_dv_destroy_flow_matcher, + .dv_destroy_flow_matcher_root = __mlx5_glue_dv_destroy_flow_matcher, .dv_open_device = mlx5_glue_dv_open_device, .devx_obj_create = mlx5_glue_devx_obj_create, .devx_obj_destroy = mlx5_glue_devx_obj_destroy, diff --git a/drivers/common/mlx5/linux/mlx5_glue.h b/drivers/common/mlx5/linux/mlx5_glue.h index 9616dfdd06..88aa7430e8 100644 --- a/drivers/common/mlx5/linux/mlx5_glue.h +++ b/drivers/common/mlx5/linux/mlx5_glue.h @@ -91,10 +91,12 @@ struct mlx5dv_port; #define MLX5_PORT_QUERY_VPORT (1u << 0) #define MLX5_PORT_QUERY_REG_C0 (1u << 1) +#define MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID (1u << 2) struct mlx5_port_info { uint16_t query_flags; uint16_t vport_id; /* Associated VF vport index (if any). */ + uint16_t esw_owner_vhca_id; /* Associated the esw_owner that this VF belongs to. */ uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */ uint32_t vport_meta_mask; /* Used for vport index field match mask. 
*/ }; @@ -164,6 +166,7 @@ struct mlx5_glue { int (*query_device_ex)(struct ibv_context *context, const struct ibv_query_device_ex_input *input, struct ibv_device_attr_ex *attr); + const char *(*get_device_name)(struct ibv_device *device); int (*query_rt_values_ex)(struct ibv_context *context, struct ibv_values_ex *values); int (*query_port)(struct ibv_context *context, uint8_t port_num, @@ -268,8 +271,13 @@ struct mlx5_glue { (struct ibv_context *context, struct mlx5dv_flow_matcher_attr *matcher_attr, void *tbl); + void *(*dv_create_flow_matcher_root) + (struct ibv_context *context, + struct mlx5dv_flow_matcher_attr *matcher_attr); void *(*dv_create_flow)(void *matcher, void *match_value, size_t num_actions, void *actions[]); + void *(*dv_create_flow_root)(void *matcher, void *match_value, + size_t num_actions, void *actions); void *(*dv_create_flow_action_counter)(void *obj, uint32_t offset); void *(*dv_create_flow_action_dest_ibv_qp)(void *qp); void *(*dv_create_flow_action_dest_devx_tir)(void *tir); @@ -277,12 +285,20 @@ struct mlx5_glue { (struct ibv_context *ctx, enum mlx5dv_flow_table_type ft_type, void *domain, uint64_t flags, size_t actions_sz, uint64_t actions[]); + void *(*dv_create_flow_action_modify_header_root) + (struct ibv_context *ctx, size_t actions_sz, uint64_t actions[], + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_packet_reformat) (struct ibv_context *ctx, enum mlx5dv_flow_action_packet_reformat_type reformat_type, enum mlx5dv_flow_table_type ft_type, struct mlx5dv_dr_domain *domain, uint32_t flags, size_t data_sz, void *data); + void *(*dv_create_flow_action_packet_reformat_root) + (struct ibv_context *ctx, + size_t data_sz, void *data, + enum mlx5dv_flow_action_packet_reformat_type reformat_type, + enum mlx5dv_flow_table_type ft_type); void *(*dv_create_flow_action_tag)(uint32_t tag); void *(*dv_create_flow_action_meter) (struct mlx5dv_dr_flow_meter_attr *attr); @@ -291,6 +307,7 @@ struct mlx5_glue { void *(*dr_create_flow_action_default_miss)(void); int (*dv_destroy_flow)(void *flow); int (*dv_destroy_flow_matcher)(void *matcher); + int (*dv_destroy_flow_matcher_root)(void *matcher); struct ibv_context *(*dv_open_device)(struct ibv_device *device); struct mlx5dv_var *(*dv_alloc_var)(struct ibv_context *context, uint32_t flags); -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
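The new *_root glue entries follow the existing mlx5_glue pattern: every rdma-core call is reached through a function-pointer table, and each entry compiles either as a thin wrapper around the library call or as an ENOTSUP stub when the corresponding HAVE_* macro is absent, so the PMD still links and runs on systems without that support. The sketch below models the pattern with a fictional backend; HAVE_BACKEND_SUPPORT, backend_create_matcher() and the other names are placeholders, not rdma-core symbols.

#include <errno.h>
#include <stdio.h>

struct glue_ops {
	void *(*create_matcher_root)(void *ctx, void *attr);
	int   (*destroy_matcher_root)(void *matcher);
};

#ifdef HAVE_BACKEND_SUPPORT
/* Thin wrappers forwarding to the (hypothetical) real library. */
static void *glue_create_matcher_root(void *ctx, void *attr)
{
	return backend_create_matcher(ctx, attr);
}
static int glue_destroy_matcher_root(void *matcher)
{
	return backend_destroy_matcher(matcher);
}
#else
/* Stubs keep the table complete when the backend is not built in. */
static void *glue_create_matcher_root(void *ctx, void *attr)
{
	(void)ctx;
	(void)attr;
	errno = ENOTSUP;
	return NULL;
}
static int glue_destroy_matcher_root(void *matcher)
{
	(void)matcher;
	errno = ENOTSUP;
	return errno;
}
#endif

static const struct glue_ops glue = {
	.create_matcher_root = glue_create_matcher_root,
	.destroy_matcher_root = glue_destroy_matcher_root,
};

int main(void)
{
	void *m = glue.create_matcher_root(NULL, NULL);

	if (m == NULL)
		printf("matcher not created: %s\n",
		       errno == ENOTSUP ? "backend not compiled in" : "error");
	else
		glue.destroy_matcher_root(m);
	return 0;
}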
* RE: [v6 07/18] net/mlx5: Add additional glue functions for HWS 2022-10-20 15:57 ` [v6 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-24 6:52 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:52 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 07/18] net/mlx5: Add additional glue functions for HWS > > Add missing glue support for HWS mlx5dr layer. The new glue functions are > needed for mlx5dv create matcher and action, which are used as the kernel > root table as well as for capabilities query like device name and ports > info. > > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 08/18] net/mlx5/hws: Add HWS command layer 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (6 preceding siblings ...) 2022-10-20 15:57 ` [v6 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker ` (10 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> The command layer is used to communicate with the FW, query capabilities and allocate FW resources needed for HWS. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/common/mlx5/mlx5_prm.h | 607 ++++++++++++++++++- drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 ++++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++++++++ 3 files changed, 1775 insertions(+), 10 deletions(-) create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h index ca4763f53d..371942ae50 100644 --- a/drivers/common/mlx5/mlx5_prm.h +++ b/drivers/common/mlx5/mlx5_prm.h @@ -289,6 +289,8 @@ /* The alignment needed for CQ buffer. */ #define MLX5_CQE_BUF_ALIGNMENT rte_mem_page_size() +#define MAX_ACTIONS_DATA_IN_HEADER_MODIFY 512 + /* Completion mode. */ enum mlx5_completion_mode { MLX5_COMP_ONLY_ERR = 0x0, @@ -677,6 +679,10 @@ enum { MLX5_MODIFICATION_TYPE_SET = 0x1, MLX5_MODIFICATION_TYPE_ADD = 0x2, MLX5_MODIFICATION_TYPE_COPY = 0x3, + MLX5_MODIFICATION_TYPE_INSERT = 0x4, + MLX5_MODIFICATION_TYPE_REMOVE = 0x5, + MLX5_MODIFICATION_TYPE_NOP = 0x6, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS = 0x7, }; /* The field of packet to be modified. 
*/ @@ -1111,6 +1117,10 @@ enum { MLX5_CMD_OP_QUERY_TIS = 0x915, MLX5_CMD_OP_CREATE_RQT = 0x916, MLX5_CMD_OP_MODIFY_RQT = 0x917, + MLX5_CMD_OP_CREATE_FLOW_TABLE = 0x930, + MLX5_CMD_OP_CREATE_FLOW_GROUP = 0x933, + MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY = 0x936, + MLX5_CMD_OP_MODIFY_FLOW_TABLE = 0x93c, MLX5_CMD_OP_ALLOC_FLOW_COUNTER = 0x939, MLX5_CMD_OP_QUERY_FLOW_COUNTER = 0x93b, MLX5_CMD_OP_CREATE_GENERAL_OBJECT = 0xa00, @@ -1299,6 +1309,7 @@ enum { MLX5_SET_HCA_CAP_OP_MOD_ESW = 0x9 << 1, MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1, MLX5_GET_HCA_CAP_OP_MOD_CRYPTO = 0x1A << 1, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE = 0x1B << 1, MLX5_GET_HCA_CAP_OP_MOD_PARSE_GRAPH_NODE_CAP = 0x1C << 1, MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 = 0x20 << 1, }; @@ -1317,6 +1328,14 @@ enum { (1ULL << MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT) #define MLX5_GENERAL_OBJ_TYPES_CAP_CONN_TRACK_OFFLOAD \ (1ULL << MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD) +#define MLX5_GENERAL_OBJ_TYPES_CAP_RTC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_RTC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STC \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STC) +#define MLX5_GENERAL_OBJ_TYPES_CAP_STE \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_STE) +#define MLX5_GENERAL_OBJ_TYPES_CAP_DEFINER \ + (1ULL << MLX5_GENERAL_OBJ_TYPE_DEFINER) #define MLX5_GENERAL_OBJ_TYPES_CAP_DEK \ (1ULL << MLX5_GENERAL_OBJ_TYPE_DEK) #define MLX5_GENERAL_OBJ_TYPES_CAP_IMPORT_KEK \ @@ -1373,6 +1392,11 @@ enum { #define MLX5_HCA_FLEX_VXLAN_GPE_ENABLED (1UL << 7) #define MLX5_HCA_FLEX_ICMP_ENABLED (1UL << 8) #define MLX5_HCA_FLEX_ICMPV6_ENABLED (1UL << 9) +#define MLX5_HCA_FLEX_GTPU_ENABLED (1UL << 11) +#define MLX5_HCA_FLEX_GTPU_DW_2_ENABLED (1UL << 16) +#define MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED (1UL << 17) +#define MLX5_HCA_FLEX_GTPU_DW_0_ENABLED (1UL << 18) +#define MLX5_HCA_FLEX_GTPU_TEID_ENABLED (1UL << 19) /* The device steering logic format. 
*/ #define MLX5_STEERING_LOGIC_FORMAT_CONNECTX_5 0x0 @@ -1505,7 +1529,8 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 wol_u[0x1]; u8 wol_p[0x1]; u8 stat_rate_support[0x10]; - u8 reserved_at_1f0[0xc]; + u8 reserved_at_1ef[0xb]; + u8 wqe_based_flow_table_update_cap[0x1]; u8 cqe_version[0x4]; u8 compact_address_vector[0x1]; u8 striding_rq[0x1]; @@ -1681,7 +1706,9 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 cqe_compression[0x1]; u8 cqe_compression_timeout[0x10]; u8 cqe_compression_max_num[0x10]; - u8 reserved_at_5e0[0x10]; + u8 reserved_at_5e0[0x8]; + u8 flex_parser_id_gtpu_dw_0[0x4]; + u8 reserved_at_5ec[0x4]; u8 tag_matching[0x1]; u8 rndv_offload_rc[0x1]; u8 rndv_offload_dc[0x1]; @@ -1691,17 +1718,38 @@ struct mlx5_ifc_cmd_hca_cap_bits { u8 affiliate_nic_vport_criteria[0x8]; u8 native_port_num[0x8]; u8 num_vhca_ports[0x8]; - u8 reserved_at_618[0x6]; + u8 flex_parser_id_gtpu_teid[0x4]; + u8 reserved_at_61c[0x2]; u8 sw_owner_id[0x1]; u8 reserved_at_61f[0x6C]; u8 wait_on_data[0x1]; u8 wait_on_time[0x1]; - u8 reserved_at_68d[0xBB]; + u8 reserved_at_68d[0x37]; + u8 flex_parser_id_geneve_opt_0[0x4]; + u8 flex_parser_id_icmp_dw1[0x4]; + u8 flex_parser_id_icmp_dw0[0x4]; + u8 flex_parser_id_icmpv6_dw1[0x4]; + u8 flex_parser_id_icmpv6_dw0[0x4]; + u8 flex_parser_id_outer_first_mpls_over_gre[0x4]; + u8 flex_parser_id_outer_first_mpls_over_udp_label[0x4]; + u8 reserved_at_6e0[0x20]; + u8 flex_parser_id_gtpu_dw_2[0x4]; + u8 flex_parser_id_gtpu_first_ext_dw_0[0x4]; + u8 reserved_at_708[0x40]; u8 dma_mmo_qp[0x1]; u8 regexp_mmo_qp[0x1]; u8 compress_mmo_qp[0x1]; u8 decompress_mmo_qp[0x1]; - u8 reserved_at_624[0xd4]; + u8 reserved_at_74c[0x14]; + u8 reserved_at_760[0x3]; + u8 log_max_num_header_modify_argument[0x5]; + u8 log_header_modify_argument_granularity_offset[0x4]; + u8 log_header_modify_argument_granularity[0x4]; + u8 reserved_at_770[0x3]; + u8 log_header_modify_argument_max_alloc[0x5]; + u8 reserved_at_778[0x8]; + u8 reserved_at_780[0x40]; + u8 match_definer_format_supported[0x40]; }; struct mlx5_ifc_qos_cap_bits { @@ -1876,7 +1924,9 @@ struct mlx5_ifc_flow_table_prop_layout_bits { u8 log_max_ft_sampler_num[8]; u8 metadata_reg_b_width[0x8]; u8 metadata_reg_a_width[0x8]; - u8 reserved_at_60[0x18]; + u8 reserved_at_60[0xa]; + u8 reparse[0x1]; + u8 reserved_at_6b[0xd]; u8 log_max_ft_num[0x8]; u8 reserved_at_80[0x10]; u8 log_max_flow_counter[0x8]; @@ -2061,7 +2111,17 @@ struct mlx5_ifc_cmd_hca_cap_2_bits { u8 hairpin_sq_wqe_bb_size[0x5]; u8 hairpin_sq_wq_in_host_mem[0x1]; u8 hairpin_data_buffer_locked[0x1]; - u8 reserved_at_16a[0x696]; + u8 reserved_at_16a[0x36]; + u8 reserved_at_1a0[0xb]; + u8 format_select_dw_8_6_ext[0x1]; + u8 reserved_at_1ac[0x14]; + u8 general_obj_types_127_64[0x40]; + u8 reserved_at_200[0x80]; + u8 format_select_dw_gtpu_dw_0[0x8]; + u8 format_select_dw_gtpu_dw_1[0x8]; + u8 format_select_dw_gtpu_dw_2[0x8]; + u8 format_select_dw_gtpu_first_ext_dw_0[0x8]; + u8 reserved_at_2a0[0x560]; }; struct mlx5_ifc_esw_cap_bits { @@ -2074,6 +2134,37 @@ struct mlx5_ifc_esw_cap_bits { u8 reserved_at_80[0x780]; }; +struct mlx5_ifc_wqe_based_flow_table_cap_bits { + u8 reserved_at_0[0x3]; + u8 log_max_num_ste[0x5]; + u8 reserved_at_8[0x3]; + u8 log_max_num_stc[0x5]; + u8 reserved_at_10[0x3]; + u8 log_max_num_rtc[0x5]; + u8 reserved_at_18[0x3]; + u8 log_max_num_header_modify_pattern[0x5]; + u8 reserved_at_20[0x3]; + u8 stc_alloc_log_granularity[0x5]; + u8 reserved_at_28[0x3]; + u8 stc_alloc_log_max[0x5]; + u8 reserved_at_30[0x3]; + u8 ste_alloc_log_granularity[0x5]; + u8 reserved_at_38[0x3]; + u8 
ste_alloc_log_max[0x5]; + u8 reserved_at_40[0xb]; + u8 rtc_reparse_mode[0x5]; + u8 reserved_at_50[0x3]; + u8 rtc_index_mode[0x5]; + u8 reserved_at_58[0x3]; + u8 rtc_log_depth_max[0x5]; + u8 reserved_at_60[0x10]; + u8 ste_format[0x10]; + u8 stc_action_type[0x80]; + u8 header_insert_type[0x10]; + u8 header_remove_type[0x10]; + u8 trivial_match_definer[0x20]; +}; + union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_cmd_hca_cap_bits cmd_hca_cap; struct mlx5_ifc_cmd_hca_cap_2_bits cmd_hca_cap_2; @@ -2085,6 +2176,7 @@ union mlx5_ifc_hca_cap_union_bits { struct mlx5_ifc_flow_table_esw_cap_bits flow_table_esw_cap; struct mlx5_ifc_esw_cap_bits esw_cap; struct mlx5_ifc_roce_caps_bits roce_caps; + struct mlx5_ifc_wqe_based_flow_table_cap_bits wqe_based_flow_table_cap; u8 reserved_at_0[0x8000]; }; @@ -2098,6 +2190,20 @@ struct mlx5_ifc_set_action_in_bits { u8 data[0x20]; }; +struct mlx5_ifc_copy_action_in_bits { + u8 action_type[0x4]; + u8 src_field[0xc]; + u8 reserved_at_10[0x3]; + u8 src_offset[0x5]; + u8 reserved_at_18[0x3]; + u8 length[0x5]; + u8 reserved_at_20[0x4]; + u8 dst_field[0xc]; + u8 reserved_at_30[0x3]; + u8 dst_offset[0x5]; + u8 reserved_at_38[0x8]; +}; + struct mlx5_ifc_query_hca_cap_out_bits { u8 status[0x8]; u8 reserved_at_8[0x18]; @@ -2978,6 +3084,7 @@ enum { MLX5_GENERAL_OBJ_TYPE_GENEVE_TLV_OPT = 0x000b, MLX5_GENERAL_OBJ_TYPE_DEK = 0x000c, MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d, + MLX5_GENERAL_OBJ_TYPE_DEFINER = 0x0018, MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c, MLX5_GENERAL_OBJ_TYPE_IMPORT_KEK = 0x001d, MLX5_GENERAL_OBJ_TYPE_CREDENTIAL = 0x001e, @@ -2986,6 +3093,11 @@ enum { MLX5_GENERAL_OBJ_TYPE_FLOW_METER_ASO = 0x0024, MLX5_GENERAL_OBJ_TYPE_FLOW_HIT_ASO = 0x0025, MLX5_GENERAL_OBJ_TYPE_CONN_TRACK_OFFLOAD = 0x0031, + MLX5_GENERAL_OBJ_TYPE_ARG = 0x0023, + MLX5_GENERAL_OBJ_TYPE_STC = 0x0040, + MLX5_GENERAL_OBJ_TYPE_RTC = 0x0041, + MLX5_GENERAL_OBJ_TYPE_STE = 0x0042, + MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN = 0x0043, }; struct mlx5_ifc_general_obj_in_cmd_hdr_bits { @@ -2993,9 +3105,14 @@ struct mlx5_ifc_general_obj_in_cmd_hdr_bits { u8 reserved_at_10[0x20]; u8 obj_type[0x10]; u8 obj_id[0x20]; - u8 reserved_at_60[0x3]; - u8 log_obj_range[0x5]; - u8 reserved_at_58[0x18]; + union { + struct { + u8 reserved_at_60[0x3]; + u8 log_obj_range[0x5]; + u8 reserved_at_58[0x18]; + }; + u8 obj_offset[0x20]; + }; }; struct mlx5_ifc_general_obj_out_cmd_hdr_bits { @@ -3029,6 +3146,243 @@ struct mlx5_ifc_geneve_tlv_option_bits { u8 reserved_at_80[0x180]; }; + +enum mlx5_ifc_rtc_update_mode { + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH = 0x0, + MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET = 0x1, +}; + +enum mlx5_ifc_rtc_ste_format { + MLX5_IFC_RTC_STE_FORMAT_8DW = 0x4, + MLX5_IFC_RTC_STE_FORMAT_11DW = 0x5, +}; + +enum mlx5_ifc_rtc_reparse_mode { + MLX5_IFC_RTC_REPARSE_NEVER = 0x0, + MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1, +}; + +struct mlx5_ifc_rtc_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x40]; + u8 update_index_mode[0x2]; + u8 reparse_mode[0x2]; + u8 reserved_at_84[0x4]; + u8 pd[0x18]; + u8 reserved_at_a0[0x13]; + u8 log_depth[0x5]; + u8 log_hash_size[0x8]; + u8 ste_format[0x8]; + u8 table_type[0x8]; + u8 reserved_at_d0[0x10]; + u8 match_definer_id[0x20]; + u8 stc_id[0x20]; + u8 ste_table_base_id[0x20]; + u8 ste_table_offset[0x20]; + u8 reserved_at_160[0x8]; + u8 miss_flow_table_id[0x18]; + u8 reserved_at_180[0x280]; +}; + +enum mlx5_ifc_stc_action_type { + MLX5_IFC_STC_ACTION_TYPE_NOP = 0x00, + MLX5_IFC_STC_ACTION_TYPE_COPY = 0x05, + MLX5_IFC_STC_ACTION_TYPE_SET = 0x06, + 
MLX5_IFC_STC_ACTION_TYPE_ADD = 0x07, + MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS = 0x08, + MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE = 0x09, + MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b, + MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c, + MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e, + MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12, + MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR = 0x81, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT = 0x82, + MLX5_IFC_STC_ACTION_TYPE_DROP = 0x83, + MLX5_IFC_STC_ACTION_TYPE_ALLOW = 0x84, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT = 0x85, + MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86, +}; + +struct mlx5_ifc_stc_ste_param_ste_table_bits { + u8 ste_obj_id[0x20]; + u8 match_definer_id[0x20]; + u8 reserved_at_40[0x3]; + u8 log_hash_size[0x5]; + u8 reserved_at_48[0x38]; +}; + +struct mlx5_ifc_stc_ste_param_tir_bits { + u8 reserved_at_0[0x8]; + u8 tirn[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_table_bits { + u8 reserved_at_0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_20[0x60]; +}; + +struct mlx5_ifc_stc_ste_param_flow_counter_bits { + u8 flow_counter_id[0x20]; +}; + +enum { + MLX5_ASO_CT_NUM_PER_OBJ = 1, + MLX5_ASO_METER_NUM_PER_OBJ = 2, +}; + +struct mlx5_ifc_stc_ste_param_execute_aso_bits { + u8 aso_object_id[0x20]; + u8 return_reg_id[0x4]; + u8 aso_type[0x4]; + u8 reserved_at_28[0x18]; +}; + +struct mlx5_ifc_stc_ste_param_header_modify_list_bits { + u8 header_modify_pattern_id[0x20]; + u8 header_modify_argument_id[0x20]; +}; + +enum mlx5_ifc_header_anchors { + MLX5_HEADER_ANCHOR_PACKET_START = 0x0, + MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2, + MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07, + MLX5_HEADER_ANCHOR_INNER_MAC = 0x13, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19, +}; + +struct mlx5_ifc_stc_ste_param_remove_bits { + u8 action_type[0x4]; + u8 decap[0x1]; + u8 reserved_at_5[0x5]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x2]; + u8 remove_end_anchor[0x6]; + u8 reserved_at_18[0x8]; +}; + +struct mlx5_ifc_stc_ste_param_remove_words_bits { + u8 action_type[0x4]; + u8 reserved_at_4[0x6]; + u8 remove_start_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 remove_offset[0x7]; + u8 reserved_at_18[0x2]; + u8 remove_size[0x6]; +}; + +struct mlx5_ifc_stc_ste_param_insert_bits { + u8 action_type[0x4]; + u8 encap[0x1]; + u8 inline_data[0x1]; + u8 reserved_at_6[0x4]; + u8 insert_anchor[0x6]; + u8 reserved_at_10[0x1]; + u8 insert_offset[0x7]; + u8 reserved_at_18[0x1]; + u8 insert_size[0x7]; + u8 insert_argument[0x20]; +}; + +struct mlx5_ifc_stc_ste_param_vport_bits { + u8 eswitch_owner_vhca_id[0x10]; + u8 vport_number[0x10]; + u8 eswitch_owner_vhca_id_valid[0x1]; + u8 reserved_at_21[0x59]; +}; + +union mlx5_ifc_stc_param_bits { + struct mlx5_ifc_stc_ste_param_ste_table_bits ste_table; + struct mlx5_ifc_stc_ste_param_tir_bits tir; + struct mlx5_ifc_stc_ste_param_table_bits table; + struct mlx5_ifc_stc_ste_param_flow_counter_bits counter; + struct mlx5_ifc_stc_ste_param_header_modify_list_bits modify_header; + struct mlx5_ifc_stc_ste_param_execute_aso_bits aso; + struct mlx5_ifc_stc_ste_param_remove_bits remove_header; + struct mlx5_ifc_stc_ste_param_insert_bits insert_header; + struct mlx5_ifc_set_action_in_bits add; + struct mlx5_ifc_set_action_in_bits set; + struct mlx5_ifc_copy_action_in_bits copy; + struct mlx5_ifc_stc_ste_param_vport_bits vport; + u8 reserved_at_0[0x80]; +}; + +enum { + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC = 1 << 0, +}; + +struct mlx5_ifc_stc_bits { + u8 
modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 ste_action_offset[0x8]; + u8 action_type[0x8]; + u8 reserved_at_a0[0x60]; + union mlx5_ifc_stc_param_bits stc_param; + u8 reserved_at_180[0x280]; +}; + +struct mlx5_ifc_ste_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x48]; + u8 table_type[0x8]; + u8 reserved_at_90[0x370]; +}; + +enum { + MLX5_IFC_DEFINER_FORMAT_ID_SELECT = 61, +}; + +struct mlx5_ifc_definer_bits { + u8 modify_field_select[0x40]; + u8 reserved_at_40[0x50]; + u8 format_id[0x10]; + u8 reserved_at_60[0x60]; + u8 format_select_dw3[0x8]; + u8 format_select_dw2[0x8]; + u8 format_select_dw1[0x8]; + u8 format_select_dw0[0x8]; + u8 format_select_dw7[0x8]; + u8 format_select_dw6[0x8]; + u8 format_select_dw5[0x8]; + u8 format_select_dw4[0x8]; + u8 reserved_at_100[0x18]; + u8 format_select_dw8[0x8]; + u8 reserved_at_120[0x20]; + u8 format_select_byte3[0x8]; + u8 format_select_byte2[0x8]; + u8 format_select_byte1[0x8]; + u8 format_select_byte0[0x8]; + u8 format_select_byte7[0x8]; + u8 format_select_byte6[0x8]; + u8 format_select_byte5[0x8]; + u8 format_select_byte4[0x8]; + u8 reserved_at_180[0x40]; + u8 ctrl[0xa0]; + u8 match_mask[0x160]; +}; + +struct mlx5_ifc_arg_bits { + u8 rsvd0[0x88]; + u8 access_pd[0x18]; +}; + +struct mlx5_ifc_header_modify_pattern_in_bits { + u8 modify_field_select[0x40]; + + u8 reserved_at_40[0x40]; + + u8 pattern_length[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x60]; + + u8 pattern_data[MAX_ACTIONS_DATA_IN_HEADER_MODIFY * 8]; +}; + struct mlx5_ifc_create_virtio_q_counters_in_bits { struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; struct mlx5_ifc_virtio_q_counters_bits virtio_q_counters; @@ -3044,6 +3398,36 @@ struct mlx5_ifc_create_geneve_tlv_option_in_bits { struct mlx5_ifc_geneve_tlv_option_bits geneve_tlv_opt; }; +struct mlx5_ifc_create_rtc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_rtc_bits rtc; +}; + +struct mlx5_ifc_create_stc_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_stc_bits stc; +}; + +struct mlx5_ifc_create_ste_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_ste_bits ste; +}; + +struct mlx5_ifc_create_definer_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_definer_bits definer; +}; + +struct mlx5_ifc_create_arg_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_arg_bits arg; +}; + +struct mlx5_ifc_create_header_modify_pattern_in_bits { + struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr; + struct mlx5_ifc_header_modify_pattern_in_bits pattern; +}; + enum { MLX5_CRYPTO_KEY_SIZE_128b = 0x0, MLX5_CRYPTO_KEY_SIZE_256b = 0x1, @@ -4253,6 +4637,209 @@ struct mlx5_ifc_query_q_counter_in_bits { u8 counter_set_id[0x8]; }; +enum { + FS_FT_NIC_RX = 0x0, + FS_FT_NIC_TX = 0x1, + FS_FT_FDB = 0x4, + FS_FT_FDB_RX = 0xa, + FS_FT_FDB_TX = 0xb, +}; + +struct mlx5_ifc_flow_table_context_bits { + u8 reformat_en[0x1]; + u8 decap_en[0x1]; + u8 sw_owner[0x1]; + u8 termination_table[0x1]; + u8 table_miss_action[0x4]; + u8 level[0x8]; + u8 rtc_valid[0x1]; + u8 reserved_at_11[0x7]; + u8 log_size[0x8]; + + u8 reserved_at_20[0x8]; + u8 table_miss_id[0x18]; + + u8 reserved_at_40[0x8]; + u8 lag_master_next_table_id[0x18]; + + u8 reserved_at_60[0x60]; + + u8 rtc_id_0[0x20]; + + u8 rtc_id_1[0x20]; + + u8 reserved_at_100[0x40]; +}; + +struct mlx5_ifc_create_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 
other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x20]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x20]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_create_flow_table_out_bits { + u8 status[0x8]; + u8 icm_address_63_40[0x18]; + u8 syndrome[0x20]; + u8 icm_address_39_32[0x8]; + u8 table_id[0x18]; + u8 icm_address_31_0[0x20]; +}; + +enum mlx5_flow_destination_type { + MLX5_FLOW_DESTINATION_TYPE_VPORT = 0x0, +}; + +enum { + MLX5_FLOW_CONTEXT_ACTION_FWD_DEST = 0x4, +}; + +struct mlx5_ifc_set_fte_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x40]; +}; + +struct mlx5_ifc_dest_format_bits { + u8 destination_type[0x8]; + u8 destination_id[0x18]; + u8 destination_eswitch_owner_vhca_id_valid[0x1]; + u8 packet_reformat[0x1]; + u8 reserved_at_22[0xe]; + u8 destination_eswitch_owner_vhca_id[0x10]; +}; + +struct mlx5_ifc_flow_counter_list_bits { + u8 flow_counter_id[0x20]; + u8 reserved_at_20[0x20]; +}; + +union mlx5_ifc_dest_format_flow_counter_list_auto_bits { + struct mlx5_ifc_dest_format_bits dest_format; + struct mlx5_ifc_flow_counter_list_bits flow_counter_list; + u8 reserved_at_0[0x40]; +}; + +struct mlx5_ifc_flow_context_bits { + u8 reserved_at_00[0x20]; + u8 group_id[0x20]; + u8 reserved_at_40[0x8]; + u8 flow_tag[0x18]; + u8 reserved_at_60[0x10]; + u8 action[0x10]; + u8 extended_destination[0x1]; + u8 reserved_at_81[0x7]; + u8 destination_list_size[0x18]; + u8 reserved_at_a0[0x8]; + u8 flow_counter_list_size[0x18]; + u8 reserved_at_c0[0x1740]; + /* Currently only one destnation */ + union mlx5_ifc_dest_format_flow_counter_list_auto_bits destination[1]; +}; + +struct mlx5_ifc_set_fte_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 ignore_flow_level[0x1]; + u8 reserved_at_c1[0x17]; + u8 modify_enable_mask[0x8]; + u8 reserved_at_e0[0x20]; + u8 flow_index[0x20]; + u8 reserved_at_120[0xe0]; + struct mlx5_ifc_flow_context_bits flow_context; +}; + +struct mlx5_ifc_create_flow_group_in_bits { + u8 opcode[0x10]; + u8 reserved_at_10[0x10]; + u8 reserved_at_20[0x20]; + u8 other_vport[0x1]; + u8 reserved_at_41[0xf]; + u8 vport_number[0x10]; + u8 reserved_at_60[0x20]; + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + u8 reserved_at_c0[0x1f40]; +}; + +struct mlx5_ifc_create_flow_group_out_bits { + u8 status[0x8]; + u8 reserved_at_8[0x18]; + u8 syndrome[0x20]; + u8 reserved_at_40[0x8]; + u8 group_id[0x18]; + u8 reserved_at_60[0x20]; +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION = 1 << 0, + MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID = 1 << 1, +}; + +enum { + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_DEFAULT = 0, + MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL = 1, +}; + +struct mlx5_ifc_modify_flow_table_in_bits { + u8 opcode[0x10]; + u8 uid[0x10]; + + u8 reserved_at_20[0x10]; + u8 op_mod[0x10]; + + u8 reserved_at_40[0x10]; + u8 vport_number[0x10]; + + u8 reserved_at_60[0x10]; + u8 modify_field_select[0x10]; + + u8 table_type[0x8]; + u8 reserved_at_88[0x18]; + + u8 reserved_at_a0[0x8]; + u8 table_id[0x18]; + + struct mlx5_ifc_flow_table_context_bits flow_table_context; +}; + +struct mlx5_ifc_modify_flow_table_out_bits { + u8 status[0x8]; + u8 
reserved_at_8[0x18]; + + u8 syndrome[0x20]; + + u8 reserved_at_40[0x60]; +}; + /* CQE format mask. */ #define MLX5E_CQE_FORMAT_MASK 0xc diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c new file mode 100644 index 0000000000..da8cc3d265 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c @@ -0,0 +1,948 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj) +{ + int ret; + + ret = mlx5_glue->devx_obj_destroy(devx_obj->obj); + simple_free(devx_obj); + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_table_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ft_ctx; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow table object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_table_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_TABLE); + MLX5_SET(create_flow_table_in, in, table_type, ft_attr->type); + + ft_ctx = MLX5_ADDR_OF(create_flow_table_in, in, flow_table_context); + MLX5_SET(flow_table_context, ft_ctx, level, ft_attr->level); + MLX5_SET(flow_table_context, ft_ctx, rtc_valid, ft_attr->rtc_valid); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FT"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_table_out, out, table_id); + + return devx_obj; +} + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_flow_table_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_flow_table_in)] = {0}; + void *ft_ctx; + int ret; + + MLX5_SET(modify_flow_table_in, in, opcode, MLX5_CMD_OP_MODIFY_FLOW_TABLE); + MLX5_SET(modify_flow_table_in, in, table_type, ft_attr->type); + MLX5_SET(modify_flow_table_in, in, modify_field_select, ft_attr->modify_fs); + MLX5_SET(modify_flow_table_in, in, table_id, devx_obj->id); + + ft_ctx = MLX5_ADDR_OF(modify_flow_table_in, in, flow_table_context); + + MLX5_SET(flow_table_context, ft_ctx, table_miss_action, ft_attr->table_miss_action); + MLX5_SET(flow_table_context, ft_ctx, table_miss_id, ft_attr->table_miss_id); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_0, ft_attr->rtc_id_0); + MLX5_SET(flow_table_context, ft_ctx, rtc_id_1, ft_attr->rtc_id_1); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify FT"); + rte_errno = errno; + } + + return ret; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_group_create(struct ibv_context *ctx, + struct mlx5dr_cmd_fg_attr *fg_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_flow_group_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_flow_group_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for flow group object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_flow_group_in, in, opcode, MLX5_CMD_OP_CREATE_FLOW_GROUP); + MLX5_SET(create_flow_group_in, in, table_type, fg_attr->table_type); + MLX5_SET(create_flow_group_in, in, table_id, 
fg_attr->table_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Flow group"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_flow_group_out, out, group_id); + + return devx_obj; +} + +static struct mlx5dr_devx_obj * +mlx5dr_cmd_set_vport_fte(struct ibv_context *ctx, + uint32_t table_type, + uint32_t table_id, + uint32_t group_id, + uint32_t vport_id) +{ + uint32_t in[MLX5_ST_SZ_DW(set_fte_in) + MLX5_ST_SZ_DW(dest_format)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(set_fte_out)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *in_flow_context; + void *in_dests; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for fte object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(set_fte_in, in, opcode, MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY); + MLX5_SET(set_fte_in, in, table_type, table_type); + MLX5_SET(set_fte_in, in, table_id, table_id); + + in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context); + MLX5_SET(flow_context, in_flow_context, group_id, group_id); + MLX5_SET(flow_context, in_flow_context, destination_list_size, 1); + MLX5_SET(flow_context, in_flow_context, action, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST); + + in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination); + MLX5_SET(dest_format, in_dests, destination_type, + MLX5_FLOW_DESTINATION_TYPE_VPORT); + MLX5_SET(dest_format, in_dests, destination_id, vport_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create FTE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + return devx_obj; +} + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl) +{ + mlx5dr_cmd_destroy_obj(tbl->fte); + mlx5dr_cmd_destroy_obj(tbl->fg); + mlx5dr_cmd_destroy_obj(tbl->ft); +} + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport) +{ + struct mlx5dr_cmd_fg_attr fg_attr = {0}; + struct mlx5dr_cmd_forward_tbl *tbl; + + tbl = simple_calloc(1, sizeof(*tbl)); + if (!tbl) { + DR_LOG(ERR, "Failed to allocate memory for forward default"); + rte_errno = ENOMEM; + return NULL; + } + + tbl->ft = mlx5dr_cmd_flow_table_create(ctx, ft_attr); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create FT for miss-table"); + goto free_tbl; + } + + fg_attr.table_id = tbl->ft->id; + fg_attr.table_type = ft_attr->type; + + tbl->fg = mlx5dr_cmd_flow_group_create(ctx, &fg_attr); + if (!tbl->fg) { + DR_LOG(ERR, "Failed to create FG for miss-table"); + goto free_ft; + } + + tbl->fte = mlx5dr_cmd_set_vport_fte(ctx, ft_attr->type, tbl->ft->id, tbl->fg->id, vport); + if (!tbl->fte) { + DR_LOG(ERR, "Failed to create FTE for miss-table"); + goto free_fg; + } + return tbl; + +free_fg: + mlx5dr_cmd_destroy_obj(tbl->fg); +free_ft: + mlx5dr_cmd_destroy_obj(tbl->ft); +free_tbl: + simple_free(tbl); + return NULL; +} + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr) +{ + struct mlx5dr_devx_obj *default_miss_tbl; + + if (type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss_tbl = ctx->common_res[type].default_miss->ft; + if (!default_miss_tbl) { + assert(false); + return; + } + ft_attr->modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION; + 
ft_attr->type = fw_ft_type; + ft_attr->table_miss_action = MLX5_IFC_MODIFY_FLOW_TABLE_MISS_ACTION_GOTO_TBL; + ft_attr->table_miss_id = default_miss_tbl->id; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_rtc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for RTC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_rtc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_RTC); + + attr = MLX5_ADDR_OF(create_rtc_in, in, rtc); + MLX5_SET(rtc, attr, ste_format, rtc_attr->is_jumbo ? + MLX5_IFC_RTC_STE_FORMAT_11DW : + MLX5_IFC_RTC_STE_FORMAT_8DW); + MLX5_SET(rtc, attr, pd, rtc_attr->pd); + MLX5_SET(rtc, attr, update_index_mode, rtc_attr->update_index_mode); + MLX5_SET(rtc, attr, log_depth, rtc_attr->log_depth); + MLX5_SET(rtc, attr, log_hash_size, rtc_attr->log_size); + MLX5_SET(rtc, attr, table_type, rtc_attr->table_type); + MLX5_SET(rtc, attr, match_definer_id, rtc_attr->definer_id); + MLX5_SET(rtc, attr, stc_id, rtc_attr->stc_base); + MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base); + MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset); + MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id); + MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create RTC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STC object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, stc_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, table_type, stc_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STC"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +static int +mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + void *stc_parm) +{ + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_COUNTER: + MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR: + MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num); + break; + case 
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT: + MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST: + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_pattern_id, stc_attr->modify_header.pattern_id); + MLX5_SET(stc_ste_param_header_modify_list, stc_parm, + header_modify_argument_id, stc_attr->modify_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE: + MLX5_SET(stc_ste_param_remove, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, stc_parm, decap, + stc_attr->remove_header.decap); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor, + stc_attr->remove_header.start_anchor); + MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor, + stc_attr->remove_header.end_anchor); + break; + case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT: + MLX5_SET(stc_ste_param_insert, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, stc_parm, encap, + stc_attr->insert_header.encap); + MLX5_SET(stc_ste_param_insert, stc_parm, inline_data, + stc_attr->insert_header.is_inline); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor, + stc_attr->insert_header.insert_anchor); + /* HW gets the next 2 sizes in words */ + MLX5_SET(stc_ste_param_insert, stc_parm, insert_size, + stc_attr->insert_header.header_size / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset, + stc_attr->insert_header.insert_offset / 2); + MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument, + stc_attr->insert_header.arg_id); + break; + case MLX5_IFC_STC_ACTION_TYPE_COPY: + case MLX5_IFC_STC_ACTION_TYPE_SET: + case MLX5_IFC_STC_ACTION_TYPE_ADD: + *(__be64 *)stc_parm = stc_attr->modify_action.data; + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK: + MLX5_SET(stc_ste_param_vport, stc_parm, vport_number, + stc_attr->vport.vport_num); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id, + stc_attr->vport.esw_owner_vhca_id); + MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1); + break; + case MLX5_IFC_STC_ACTION_TYPE_DROP: + case MLX5_IFC_STC_ACTION_TYPE_NOP: + case MLX5_IFC_STC_ACTION_TYPE_TAG: + case MLX5_IFC_STC_ACTION_TYPE_ALLOW: + break; + case MLX5_IFC_STC_ACTION_TYPE_ASO: + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id, + stc_attr->aso.devx_obj_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id, + stc_attr->aso.return_reg_id); + MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type, + stc_attr->aso.aso_type); + break; + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id, + stc_attr->ste_table.ste_obj_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id, + stc_attr->ste_table.match_definer_id); + MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size, + stc_attr->ste_table.log_hash_size); + break; + case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS: + MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor, + stc_attr->remove_words.start_anchor); + MLX5_SET(stc_ste_param_remove_words, stc_parm, + remove_size, stc_attr->remove_words.num_of_words); + break; + default: + DR_LOG(ERR, "Not supported type %d", stc_attr->action_type); + rte_errno = EINVAL; + return rte_errno; + } + return 0; +} + +int +mlx5dr_cmd_stc_modify(struct 
mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0}; + void *stc_parm; + void *attr; + int ret; + + attr = MLX5_ADDR_OF(create_stc_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_MODIFY_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STC); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, devx_obj->id); + MLX5_SET(general_obj_in_cmd_hdr, in, obj_offset, stc_attr->stc_offset); + + attr = MLX5_ADDR_OF(create_stc_in, in, stc); + MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset); + MLX5_SET(stc, attr, action_type, stc_attr->action_type); + MLX5_SET64(stc, attr, modify_field_select, + MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC); + + /* Set destination TIRN, TAG, FT ID, STE ID */ + stc_parm = MLX5_ADDR_OF(stc, attr, stc_param); + ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm); + if (ret) + return ret; + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify STC FW action_type %d", stc_attr->action_type); + rte_errno = errno; + } + + return ret; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_arg_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for ARG object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_arg_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_ARG); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, log_obj_range); + + attr = MLX5_ADDR_OF(create_arg_in, in, arg); + MLX5_SET(arg, attr, access_pd, pd); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create ARG"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions) +{ + uint32_t in[MLX5_ST_SZ_DW(create_header_modify_pattern_in)] = {0}; + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *pattern_data; + void *pattern; + void *attr; + + if (pattern_length > MAX_ACTIONS_DATA_IN_HEADER_MODIFY) { + DR_LOG(ERR, "Pattern length %d exceeds limit %d", + pattern_length, MAX_ACTIONS_DATA_IN_HEADER_MODIFY); + rte_errno = EINVAL; + return NULL; + } + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for header_modify_pattern object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_header_modify_pattern_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_MODIFY_HEADER_PATTERN); + + pattern = MLX5_ADDR_OF(create_header_modify_pattern_in, in, pattern); + /* Pattern_length is in ddwords */ + 
MLX5_SET(header_modify_pattern_in, pattern, pattern_length, pattern_length / (2 * DW_SIZE)); + + pattern_data = MLX5_ADDR_OF(header_modify_pattern_in, pattern, pattern_data); + memcpy(pattern_data, actions, pattern_length); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create header_modify_pattern"); + rte_errno = errno; + goto free_obj; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; + +free_obj: + simple_free(devx_obj); + return NULL; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr *ste_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_ste_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *attr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for STE object"); + rte_errno = ENOMEM; + return NULL; + } + + attr = MLX5_ADDR_OF(create_ste_in, in, hdr); + MLX5_SET(general_obj_in_cmd_hdr, + attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + attr, obj_type, MLX5_GENERAL_OBJ_TYPE_STE); + MLX5_SET(general_obj_in_cmd_hdr, + attr, log_obj_range, ste_attr->log_obj_range); + + attr = MLX5_ADDR_OF(create_ste_in, in, ste); + MLX5_SET(ste, attr, table_type, ste_attr->table_type); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create STE"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr) +{ + uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_definer_in)] = {0}; + struct mlx5dr_devx_obj *devx_obj; + void *ptr; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate memory for definer object"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(general_obj_in_cmd_hdr, + in, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT); + MLX5_SET(general_obj_in_cmd_hdr, + in, obj_type, MLX5_GENERAL_OBJ_TYPE_DEFINER); + + ptr = MLX5_ADDR_OF(create_definer_in, in, definer); + MLX5_SET(definer, ptr, format_id, MLX5_IFC_DEFINER_FORMAT_ID_SELECT); + + MLX5_SET(definer, ptr, format_select_dw0, def_attr->dw_selector[0]); + MLX5_SET(definer, ptr, format_select_dw1, def_attr->dw_selector[1]); + MLX5_SET(definer, ptr, format_select_dw2, def_attr->dw_selector[2]); + MLX5_SET(definer, ptr, format_select_dw3, def_attr->dw_selector[3]); + MLX5_SET(definer, ptr, format_select_dw4, def_attr->dw_selector[4]); + MLX5_SET(definer, ptr, format_select_dw5, def_attr->dw_selector[5]); + MLX5_SET(definer, ptr, format_select_dw6, def_attr->dw_selector[6]); + MLX5_SET(definer, ptr, format_select_dw7, def_attr->dw_selector[7]); + MLX5_SET(definer, ptr, format_select_dw8, def_attr->dw_selector[8]); + + MLX5_SET(definer, ptr, format_select_byte0, def_attr->byte_selector[0]); + MLX5_SET(definer, ptr, format_select_byte1, def_attr->byte_selector[1]); + MLX5_SET(definer, ptr, format_select_byte2, def_attr->byte_selector[2]); + MLX5_SET(definer, ptr, format_select_byte3, def_attr->byte_selector[3]); + MLX5_SET(definer, ptr, format_select_byte4, def_attr->byte_selector[4]); + 
MLX5_SET(definer, ptr, format_select_byte5, def_attr->byte_selector[5]); + MLX5_SET(definer, ptr, format_select_byte6, def_attr->byte_selector[6]); + MLX5_SET(definer, ptr, format_select_byte7, def_attr->byte_selector[7]); + + ptr = MLX5_ADDR_OF(definer, ptr, match_mask); + memcpy(ptr, def_attr->match_mask, MLX5_FLD_SZ_BYTES(definer, match_mask)); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + DR_LOG(ERR, "Failed to create Definer"); + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id); + + return devx_obj; +} + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr) +{ + uint32_t out[MLX5_ST_SZ_DW(create_sq_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(create_sq_in)] = {0}; + void *sqc = MLX5_ADDR_OF(create_sq_in, in, ctx); + void *wqc = MLX5_ADDR_OF(sqc, sqc, wq); + struct mlx5dr_devx_obj *devx_obj; + + devx_obj = simple_malloc(sizeof(*devx_obj)); + if (!devx_obj) { + DR_LOG(ERR, "Failed to create SQ"); + rte_errno = ENOMEM; + return NULL; + } + + MLX5_SET(create_sq_in, in, opcode, MLX5_CMD_OP_CREATE_SQ); + MLX5_SET(sqc, sqc, cqn, attr->cqn); + MLX5_SET(sqc, sqc, flush_in_error_en, 1); + MLX5_SET(sqc, sqc, non_wire, 1); + MLX5_SET(wq, wqc, wq_type, MLX5_WQ_TYPE_CYCLIC); + MLX5_SET(wq, wqc, pd, attr->pdn); + MLX5_SET(wq, wqc, uar_page, attr->page_id); + MLX5_SET(wq, wqc, log_wq_stride, log2above(MLX5_SEND_WQE_BB)); + MLX5_SET(wq, wqc, log_wq_sz, attr->log_wq_sz); + MLX5_SET(wq, wqc, dbr_umem_id, attr->dbr_id); + MLX5_SET(wq, wqc, wq_umem_id, attr->wq_id); + + devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out)); + if (!devx_obj->obj) { + simple_free(devx_obj); + rte_errno = errno; + return NULL; + } + + devx_obj->id = MLX5_GET(create_sq_out, out, sqn); + + return devx_obj; +} + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj) +{ + uint32_t out[MLX5_ST_SZ_DW(modify_sq_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(modify_sq_in)] = {0}; + void *sqc = MLX5_ADDR_OF(modify_sq_in, in, ctx); + int ret; + + MLX5_SET(modify_sq_in, in, opcode, MLX5_CMD_OP_MODIFY_SQ); + MLX5_SET(modify_sq_in, in, sqn, devx_obj->id); + MLX5_SET(modify_sq_in, in, sq_state, MLX5_SQC_STATE_RST); + MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RDY); + + ret = mlx5_glue->devx_obj_modify(devx_obj->obj, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to modify SQ"); + rte_errno = errno; + } + + return ret; +} + +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps) +{ + uint32_t out[MLX5_ST_SZ_DW(query_hca_cap_out)] = {0}; + uint32_t in[MLX5_ST_SZ_DW(query_hca_cap_in)] = {0}; + const struct flow_hw_port_info *port_info; + struct ibv_device_attr_ex attr_ex; + int ret; + + MLX5_SET(query_hca_cap_in, in, opcode, MLX5_CMD_OP_QUERY_HCA_CAP); + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->wqe_based_update = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.wqe_based_flow_table_update_cap); + + caps->eswitch_manager = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.eswitch_manager); + + caps->flex_protocols = MLX5_GET(query_hca_cap_out, out, + 
capability.cmd_hca_cap.flex_parser_protocols); + + caps->log_header_modify_argument_granularity = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_granularity); + + caps->log_header_modify_argument_granularity -= + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap. + log_header_modify_argument_granularity_offset); + + caps->log_header_modify_argument_max_alloc = + MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap.log_header_modify_argument_max_alloc); + + caps->definer_format_sup = + MLX5_GET64(query_hca_cap_out, out, + capability.cmd_hca_cap.match_definer_format_supported); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_GENERAL_DEVICE_2 | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query device caps"); + rte_errno = errno; + return rte_errno; + } + + caps->full_dw_jumbo_support = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_8_6_ext); + + caps->format_select_gtpu_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_0); + + caps->format_select_gtpu_dw_1 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_1); + + caps->format_select_gtpu_dw_2 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_dw_2); + + caps->format_select_gtpu_ext_dw_0 = MLX5_GET(query_hca_cap_out, out, + capability.cmd_hca_cap_2. + format_select_dw_gtpu_first_ext_dw_0); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table caps"); + rte_errno = errno; + return rte_errno; + } + + caps->nic_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->nic_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + if (caps->wqe_based_update) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_WQE_BASED_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query WQE based FT caps"); + rte_errno = errno; + return rte_errno; + } + + caps->rtc_reparse_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_reparse_mode); + + caps->ste_format = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_format); + + caps->rtc_index_mode = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_index_mode); + + caps->rtc_log_depth_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + rtc_log_depth_max); + + caps->ste_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_max); + + caps->ste_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + ste_alloc_log_granularity); + + caps->trivial_match_definer = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + trivial_match_definer); + + caps->stc_alloc_log_max = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. 
+ stc_alloc_log_max); + + caps->stc_alloc_log_gran = MLX5_GET(query_hca_cap_out, out, + capability.wqe_based_flow_table_cap. + stc_alloc_log_granularity); + } + + if (caps->eswitch_manager) { + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE | + MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Failed to query flow table esw caps"); + rte_errno = errno; + return rte_errno; + } + + caps->fdb_ft.max_level = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.max_ft_level); + + caps->fdb_ft.reparse = MLX5_GET(query_hca_cap_out, out, + capability.flow_table_nic_cap. + flow_table_properties_nic_receive.reparse); + + MLX5_SET(query_hca_cap_in, in, op_mod, + MLX5_SET_HCA_CAP_OP_MOD_ESW | MLX5_HCA_CAP_OPMOD_GET_CUR); + + ret = mlx5_glue->devx_general_cmd(ctx, in, sizeof(in), out, sizeof(out)); + if (ret) { + DR_LOG(ERR, "Query eswitch capabilities failed %d\n", ret); + rte_errno = errno; + return rte_errno; + } + + if (MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number_valid)) + caps->eswitch_manager_vport_number = + MLX5_GET(query_hca_cap_out, out, + capability.esw_cap.esw_manager_vport_number); + } + + ret = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex); + if (ret) { + DR_LOG(ERR, "Failed to query device attributes"); + rte_errno = ret; + return rte_errno; + } + + strlcpy(caps->fw_ver, attr_ex.orig_attr.fw_ver, sizeof(caps->fw_ver)); + + port_info = flow_hw_get_wire_port(ctx); + if (port_info) { + caps->wire_regc = port_info->regc_value; + caps->wire_regc_mask = port_info->regc_mask; + } else { + DR_LOG(INFO, "Failed to query wire port regc value"); + } + + return ret; +} + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num) +{ + struct mlx5_port_info port_info = {0}; + uint32_t flags; + int ret; + + flags = MLX5_PORT_QUERY_VPORT | MLX5_PORT_QUERY_ESW_OWNER_VHCA_ID; + + ret = mlx5_glue->devx_port_query(ctx, port_num, &port_info); + /* Check if query succeed and vport is enabled */ + if (ret || (port_info.query_flags & flags) != flags) { + rte_errno = ENOTSUP; + return rte_errno; + } + + vport_caps->vport_num = port_info.vport_id; + vport_caps->esw_owner_vhca_id = port_info.esw_owner_vhca_id; + + if (port_info.query_flags & MLX5_PORT_QUERY_REG_C0) { + vport_caps->metadata_c = port_info.vport_meta_tag; + vport_caps->metadata_c_mask = port_info.vport_meta_mask; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h new file mode 100644 index 0000000000..2548b2b238 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h @@ -0,0 +1,230 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CMD_H_ +#define MLX5DR_CMD_H_ + +struct mlx5dr_cmd_ft_create_attr { + uint8_t type; + uint8_t level; + bool rtc_valid; +}; + +struct mlx5dr_cmd_ft_modify_attr { + uint8_t type; + uint32_t rtc_id_0; + uint32_t rtc_id_1; + uint32_t table_miss_id; + uint8_t table_miss_action; + uint64_t modify_fs; +}; + +struct mlx5dr_cmd_fg_attr { + uint32_t table_id; + uint32_t table_type; +}; + +struct mlx5dr_cmd_forward_tbl { + struct mlx5dr_devx_obj *ft; + struct mlx5dr_devx_obj *fg; + struct mlx5dr_devx_obj *fte; + uint32_t refcount; +}; + +struct mlx5dr_cmd_rtc_create_attr { + uint32_t pd; + uint32_t stc_base; + uint32_t ste_base; + uint32_t 
ste_offset; + uint32_t miss_ft_id; + uint8_t update_index_mode; + uint8_t log_depth; + uint8_t log_size; + uint8_t table_type; + uint8_t definer_id; + bool is_jumbo; +}; + +struct mlx5dr_cmd_stc_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_stc_modify_attr { + uint32_t stc_offset; + uint8_t action_offset; + enum mlx5_ifc_stc_action_type action_type; + union { + uint32_t id; /* TIRN, TAG, FT ID, STE ID */ + struct { + uint8_t decap; + uint16_t start_anchor; + uint16_t end_anchor; + } remove_header; + struct { + uint32_t arg_id; + uint32_t pattern_id; + } modify_header; + struct { + __be64 data; + } modify_action; + struct { + uint32_t arg_id; + uint32_t header_size; + uint8_t is_inline; + uint8_t encap; + uint16_t insert_anchor; + uint16_t insert_offset; + } insert_header; + struct { + uint8_t aso_type; + uint32_t devx_obj_id; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + struct { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool *ste_pool; + uint32_t ste_obj_id; /* Internal */ + uint32_t match_definer_id; + uint8_t log_hash_size; + } ste_table; + struct { + uint16_t start_anchor; + uint16_t num_of_words; + } remove_words; + + uint32_t dest_table_id; + uint32_t dest_tir_num; + }; +}; + +struct mlx5dr_cmd_ste_create_attr { + uint8_t log_obj_range; + uint8_t table_type; +}; + +struct mlx5dr_cmd_definer_create_attr { + uint8_t *dw_selector; + uint8_t *byte_selector; + uint8_t *match_mask; +}; + +struct mlx5dr_cmd_sq_create_attr { + uint32_t cqn; + uint32_t pdn; + uint32_t page_id; + uint32_t dbr_id; + uint32_t wq_id; + uint32_t log_wq_sz; +}; + +struct mlx5dr_cmd_query_ft_caps { + uint8_t max_level; + uint8_t reparse; +}; + +struct mlx5dr_cmd_query_vport_caps { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + uint32_t metadata_c; + uint32_t metadata_c_mask; +}; + +struct mlx5dr_cmd_query_caps { + uint32_t wire_regc; + uint32_t wire_regc_mask; + uint32_t flex_protocols; + uint8_t wqe_based_update; + uint8_t rtc_reparse_mode; + uint16_t ste_format; + uint8_t rtc_index_mode; + uint8_t ste_alloc_log_max; + uint8_t ste_alloc_log_gran; + uint8_t stc_alloc_log_max; + uint8_t stc_alloc_log_gran; + uint8_t rtc_log_depth_max; + uint8_t format_select_gtpu_dw_0; + uint8_t format_select_gtpu_dw_1; + uint8_t format_select_gtpu_dw_2; + uint8_t format_select_gtpu_ext_dw_0; + bool full_dw_jumbo_support; + struct mlx5dr_cmd_query_ft_caps nic_ft; + struct mlx5dr_cmd_query_ft_caps fdb_ft; + bool eswitch_manager; + uint32_t eswitch_manager_vport_number; + uint8_t log_header_modify_argument_granularity; + uint8_t log_header_modify_argument_max_alloc; + uint64_t definer_format_sup; + uint32_t trivial_match_definer; + char fw_ver[64]; +}; + +int mlx5dr_cmd_destroy_obj(struct mlx5dr_devx_obj *devx_obj); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_flow_table_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr); + +int +mlx5dr_cmd_flow_table_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_rtc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_stc_create(struct ibv_context *ctx, + struct mlx5dr_cmd_stc_create_attr *stc_attr); + +int +mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj, + struct mlx5dr_cmd_stc_modify_attr *stc_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_ste_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ste_create_attr 
*ste_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_definer_create(struct ibv_context *ctx, + struct mlx5dr_cmd_definer_create_attr *def_attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_sq_create(struct ibv_context *ctx, + struct mlx5dr_cmd_sq_create_attr *attr); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_arg_create(struct ibv_context *ctx, + uint16_t log_obj_range, + uint32_t pd); + +struct mlx5dr_devx_obj * +mlx5dr_cmd_header_modify_pattern_create(struct ibv_context *ctx, + uint32_t pattern_length, + uint8_t *actions); + +int mlx5dr_cmd_sq_modify_rdy(struct mlx5dr_devx_obj *devx_obj); + +int mlx5dr_cmd_query_ib_port(struct ibv_context *ctx, + struct mlx5dr_cmd_query_vport_caps *vport_caps, + uint32_t port_num); +int mlx5dr_cmd_query_caps(struct ibv_context *ctx, + struct mlx5dr_cmd_query_caps *caps); + +void mlx5dr_cmd_miss_ft_destroy(struct mlx5dr_cmd_forward_tbl *tbl); + +struct mlx5dr_cmd_forward_tbl * +mlx5dr_cmd_miss_ft_create(struct ibv_context *ctx, + struct mlx5dr_cmd_ft_create_attr *ft_attr, + uint32_t vport); + +void mlx5dr_cmd_set_attr_connect_miss_tbl(struct mlx5dr_context *ctx, + uint32_t fw_ft_type, + enum mlx5dr_table_type type, + struct mlx5dr_cmd_ft_modify_attr *ft_attr); +#endif /* MLX5DR_CMD_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
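The command-layer API declared in mlx5dr_cmd.h is easiest to follow from the caller's side. The following is a minimal usage sketch only, not part of the patch: it assumes a DevX-capable struct ibv_context, the definitions pulled in by mlx5dr_internal.h, an illustrative function name (example_cmd_flow_table), and placeholder values for the FW table type (0, assumed to mean NIC RX) and the table level.

/* Sketch only: query capabilities, create an RTC-capable flow table,
 * then release it. Error handling is reduced to early returns.
 */
#include "mlx5dr_internal.h"

static int example_cmd_flow_table(struct ibv_context *ibv_ctx)
{
	struct mlx5dr_cmd_ft_create_attr ft_attr = {0};
	struct mlx5dr_cmd_query_caps caps = {0};
	struct mlx5dr_devx_obj *ft;
	int ret;

	/* One call collects HCA, NIC FT, WQE-based FT and ESW capabilities */
	ret = mlx5dr_cmd_query_caps(ibv_ctx, &caps);
	if (ret)
		return ret;

	/* HWS requires WQE-based flow table update support */
	if (!caps.wqe_based_update)
		return -ENOTSUP;

	ft_attr.type = 0;	/* FW table-type value, 0 assumed to be NIC RX */
	ft_attr.level = 1;	/* must stay below caps.nic_ft.max_level */
	ft_attr.rtc_valid = true;

	ft = mlx5dr_cmd_flow_table_create(ibv_ctx, &ft_attr);
	if (!ft)
		return -rte_errno;

	/* ft->id would later be connected to RTCs through
	 * mlx5dr_cmd_flow_table_modify().
	 */
	return mlx5dr_cmd_destroy_obj(ft);
}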
* RE: [v6 08/18] net/mlx5/hws: Add HWS command layer 2022-10-20 15:57 ` [v6 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-24 6:52 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:52 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Erez Shitrit > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Erez Shitrit > <erezsh@nvidia.com> > Subject: [v6 08/18] net/mlx5/hws: Add HWS command layer > > From: Erez Shitrit <erezsh@nvidia.com> > > The command layer is used to communicate with the FW, > query capabilities and allocate FW resources needed for HWS. > > Signed-off-by: Erez Shitrit <erezsh@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
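The per-action parameter encoding in mlx5dr_cmd_stc_modify_set_stc_param() is driven entirely by stc_attr->action_type. A short hypothetical sketch of the calling convention follows; the helper name, the range size and the offsets are illustrative only, since the real action offsets are chosen by the action layer later in this series.

/* Sketch only: allocate a small STC range and program entry 0 as a
 * "jump to flow table" action. Sizes and offsets are illustrative.
 */
static struct mlx5dr_devx_obj *
example_stc_jump_to_ft(struct ibv_context *ibv_ctx,
		       uint8_t fw_tbl_type,
		       uint32_t dest_ft_id)
{
	struct mlx5dr_cmd_stc_modify_attr mod_attr = {0};
	struct mlx5dr_cmd_stc_create_attr stc_attr = {0};
	struct mlx5dr_devx_obj *stc;

	stc_attr.log_obj_range = 4;	/* 2^4 = 16 STC entries (assumed) */
	stc_attr.table_type = fw_tbl_type;

	stc = mlx5dr_cmd_stc_create(ibv_ctx, &stc_attr);
	if (!stc)
		return NULL;

	mod_attr.stc_offset = 0;	/* first entry of the allocated range */
	mod_attr.action_offset = 0;	/* STE action slot, placeholder value */
	mod_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT;
	mod_attr.dest_table_id = dest_ft_id;

	if (mlx5dr_cmd_stc_modify(stc, &mod_attr)) {
		mlx5dr_cmd_destroy_obj(stc);
		return NULL;
	}

	return stc;
}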
* [v6 09/18] net/mlx5/hws: Add HWS pool and buddy 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (7 preceding siblings ...) 2022-10-20 15:57 ` [v6 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker ` (9 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> HWS needs to manage different types of device memory in an efficient and quick way. For this, memory pools are being used. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_buddy.c | 200 +++++++++ drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + drivers/net/mlx5/hws/mlx5dr_pool.c | 672 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_pool.h | 152 +++++++ 4 files changed, 1046 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.c b/drivers/net/mlx5/hws/mlx5dr_buddy.c new file mode 100644 index 0000000000..cde4f54f66 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.c @@ -0,0 +1,200 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_internal.h" +#include "mlx5dr_buddy.h" + +static struct rte_bitmap *bitmap_alloc0(int s) +{ + struct rte_bitmap *bitmap; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(s); + mem = rte_zmalloc("create_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + bitmap = rte_bitmap_init(s, mem, bmp_size); + if (!bitmap) { + DR_LOG(ERR, "%s Failed to initialize bitmap", __func__); + rte_errno = EINVAL; + goto err_mem_alloc; + } + + return bitmap; + +err_mem_alloc: + rte_free(mem); + return NULL; +} + +static void bitmap_set_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_set(bmp, pos); +} + +static void bitmap_clear_bit(struct rte_bitmap *bmp, uint32_t pos) +{ + rte_bitmap_clear(bmp, pos); +} + +static bool bitmap_test_bit(struct rte_bitmap *bmp, unsigned long n) +{ + return !!rte_bitmap_get(bmp, n); +} + +static unsigned long bitmap_ffs(struct rte_bitmap *bmap, + uint64_t n, unsigned long m) +{ + uint64_t out_slab = 0; + uint32_t pos = 0; /* Compilation warn */ + + __rte_bitmap_scan_init(bmap); + if (!rte_bitmap_scan(bmap, &pos, &out_slab)) { + DR_LOG(ERR, "Failed to get slab from bitmap."); + return m; + } + pos = pos + __builtin_ctzll(out_slab); + + if (pos < n) { + DR_LOG(ERR, "Unexpected bit (%d < %"PRIx64") from bitmap", pos, n); + return m; + } + return pos; +} + +static unsigned long mlx5dr_buddy_find_first_bit(struct rte_bitmap *addr, + uint32_t size) +{ + return bitmap_ffs(addr, 0, size); +} + +static int mlx5dr_buddy_init(struct mlx5dr_buddy_mem *buddy, uint32_t max_order) +{ + int i, s; + + buddy->max_order = max_order; + + buddy->bits = simple_calloc(buddy->max_order + 1, sizeof(long *)); + if (!buddy->bits) { + rte_errno = ENOMEM; + return -1; + } + + buddy->num_free = 
simple_calloc(buddy->max_order + 1, sizeof(*buddy->num_free)); + if (!buddy->num_free) { + rte_errno = ENOMEM; + goto err_out_free_bits; + } + + for (i = 0; i <= (int)buddy->max_order; ++i) { + s = 1 << (buddy->max_order - i); + buddy->bits[i] = bitmap_alloc0(s); + if (!buddy->bits[i]) + goto err_out_free_num_free; + } + + bitmap_set_bit(buddy->bits[buddy->max_order], 0); + + buddy->num_free[buddy->max_order] = 1; + + return 0; + +err_out_free_num_free: + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + +err_out_free_bits: + simple_free(buddy->bits); + return -1; +} + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = simple_calloc(1, sizeof(*buddy)); + if (!buddy) { + rte_errno = ENOMEM; + return NULL; + } + + if (mlx5dr_buddy_init(buddy, max_order)) + goto free_buddy; + + return buddy; + +free_buddy: + simple_free(buddy); + return NULL; +} + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy) +{ + int i; + + for (i = 0; i <= (int)buddy->max_order; ++i) + rte_free(buddy->bits[i]); + + simple_free(buddy->num_free); + simple_free(buddy->bits); +} + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order) +{ + int seg; + int o, m; + + for (o = order; o <= (int)buddy->max_order; ++o) + if (buddy->num_free[o]) { + m = 1 << (buddy->max_order - o); + seg = mlx5dr_buddy_find_first_bit(buddy->bits[o], m); + if (m <= seg) + return -1; + + goto found; + } + + return -1; + +found: + bitmap_clear_bit(buddy->bits[o], seg); + --buddy->num_free[o]; + + while (o > order) { + --o; + seg <<= 1; + bitmap_set_bit(buddy->bits[o], seg ^ 1); + ++buddy->num_free[o]; + } + + seg <<= order; + + return seg; +} + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order) +{ + seg >>= order; + + while (bitmap_test_bit(buddy->bits[order], seg ^ 1)) { + bitmap_clear_bit(buddy->bits[order], seg ^ 1); + --buddy->num_free[order]; + seg >>= 1; + ++order; + } + + bitmap_set_bit(buddy->bits[order], seg); + + ++buddy->num_free[order]; +} + diff --git a/drivers/net/mlx5/hws/mlx5dr_buddy.h b/drivers/net/mlx5/hws/mlx5dr_buddy.h new file mode 100644 index 0000000000..b9ec446b99 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_buddy.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_BUDDY_H_ +#define MLX5DR_BUDDY_H_ + +struct mlx5dr_buddy_mem { + struct rte_bitmap **bits; + unsigned int *num_free; + uint32_t max_order; +}; + +struct mlx5dr_buddy_mem *mlx5dr_buddy_create(uint32_t max_order); + +void mlx5dr_buddy_cleanup(struct mlx5dr_buddy_mem *buddy); + +int mlx5dr_buddy_alloc_mem(struct mlx5dr_buddy_mem *buddy, int order); + +void mlx5dr_buddy_free_mem(struct mlx5dr_buddy_mem *buddy, uint32_t seg, int order); + +#endif /* MLX5DR_BUDDY_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.c b/drivers/net/mlx5/hws/mlx5dr_pool.c new file mode 100644 index 0000000000..2bfda5b4a5 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.c @@ -0,0 +1,672 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include <rte_bitmap.h> +#include <rte_malloc.h> +#include "mlx5dr_buddy.h" +#include "mlx5dr_internal.h" + +static void mlx5dr_pool_free_one_resource(struct mlx5dr_pool_resource *resource) +{ + mlx5dr_cmd_destroy_obj(resource->devx_obj); + + simple_free(resource); +} + +static void mlx5dr_pool_resource_free(struct mlx5dr_pool *pool, 
+ int resource_idx) +{ + mlx5dr_pool_free_one_resource(pool->resource[resource_idx]); + pool->resource[resource_idx] = NULL; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + mlx5dr_pool_free_one_resource(pool->mirror_resource[resource_idx]); + pool->mirror_resource[resource_idx] = NULL; + } +} + +static struct mlx5dr_pool_resource * +mlx5dr_pool_create_one_resource(struct mlx5dr_pool *pool, uint32_t log_range, + uint32_t fw_ft_type) +{ + struct mlx5dr_cmd_ste_create_attr ste_attr; + struct mlx5dr_cmd_stc_create_attr stc_attr; + struct mlx5dr_pool_resource *resource; + struct mlx5dr_devx_obj *devx_obj; + + resource = simple_malloc(sizeof(*resource)); + if (!resource) { + rte_errno = ENOMEM; + return NULL; + } + + switch (pool->type) { + case MLX5DR_POOL_TYPE_STE: + ste_attr.log_obj_range = log_range; + ste_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_ste_create(pool->ctx->ibv_ctx, &ste_attr); + break; + case MLX5DR_POOL_TYPE_STC: + stc_attr.log_obj_range = log_range; + stc_attr.table_type = fw_ft_type; + devx_obj = mlx5dr_cmd_stc_create(pool->ctx->ibv_ctx, &stc_attr); + break; + default: + assert(0); + break; + } + + if (!devx_obj) { + DR_LOG(ERR, "Failed to allocate resource objects"); + goto free_resource; + } + + resource->pool = pool; + resource->devx_obj = devx_obj; + resource->range = 1 << log_range; + resource->base_id = devx_obj->id; + + return resource; + +free_resource: + simple_free(resource); + return NULL; +} + +static int +mlx5dr_pool_resource_alloc(struct mlx5dr_pool *pool, uint32_t log_range, int idx) +{ + struct mlx5dr_pool_resource *resource; + uint32_t fw_ft_type, opt_log_range; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, false); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_ORIG ? 0 : log_range; + resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!resource) { + DR_LOG(ERR, "Failed allocating resource"); + return rte_errno; + } + pool->resource[idx] = resource; + + if (pool->tbl_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_pool_resource *mir_resource; + + fw_ft_type = mlx5dr_table_get_res_fw_ft_type(pool->tbl_type, true); + opt_log_range = pool->opt_type == MLX5DR_POOL_OPTIMIZE_MIRROR ? 
0 : log_range; + mir_resource = mlx5dr_pool_create_one_resource(pool, opt_log_range, fw_ft_type); + if (!mir_resource) { + DR_LOG(ERR, "Failed allocating mirrored resource"); + mlx5dr_pool_free_one_resource(resource); + pool->resource[idx] = NULL; + return rte_errno; + } + pool->mirror_resource[idx] = mir_resource; + } + + return 0; +} + +static int mlx5dr_pool_bitmap_get_free_slot(struct rte_bitmap *bitmap, uint32_t *iidx) +{ + uint64_t slab = 0; + + __rte_bitmap_scan_init(bitmap); + + if (!rte_bitmap_scan(bitmap, iidx, &slab)) + return ENOMEM; + + *iidx += __builtin_ctzll(slab); + + rte_bitmap_clear(bitmap, *iidx); + + return 0; +} + +static struct rte_bitmap *mlx5dr_pool_create_and_init_bitmap(uint32_t log_range) +{ + struct rte_bitmap *cur_bmp; + uint32_t bmp_size; + void *mem; + + bmp_size = rte_bitmap_get_memory_footprint(1 << log_range); + mem = rte_zmalloc("create_stc_bmap", bmp_size, RTE_CACHE_LINE_SIZE); + if (!mem) { + DR_LOG(ERR, "No mem for bitmap"); + rte_errno = ENOMEM; + return NULL; + } + + cur_bmp = rte_bitmap_init_with_all_set(1 << log_range, mem, bmp_size); + if (!cur_bmp) { + rte_free(mem); + DR_LOG(ERR, "Failed to initialize stc bitmap."); + rte_errno = ENOMEM; + return NULL; + } + + return cur_bmp; +} + +static void mlx5dr_pool_buddy_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_buddy_mem *buddy; + + buddy = pool->db.buddy_manager->buddies[chunk->resource_idx]; + if (!buddy) { + assert(false); + DR_LOG(ERR, "No such buddy (%d)", chunk->resource_idx); + return; + } + + mlx5dr_buddy_free_mem(buddy, chunk->offset, chunk->order); +} + +static struct mlx5dr_buddy_mem * +mlx5dr_pool_buddy_get_next_buddy(struct mlx5dr_pool *pool, int idx, + uint32_t order, bool *is_new_buddy) +{ + static struct mlx5dr_buddy_mem *buddy; + uint32_t new_buddy_size; + + buddy = pool->db.buddy_manager->buddies[idx]; + if (buddy) + return buddy; + + new_buddy_size = RTE_MAX(pool->alloc_log_sz, order); + *is_new_buddy = true; + buddy = mlx5dr_buddy_create(new_buddy_size); + if (!buddy) { + DR_LOG(ERR, "Failed to create buddy order: %d index: %d", + new_buddy_size, idx); + return NULL; + } + + if (mlx5dr_pool_resource_alloc(pool, new_buddy_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, new_buddy_size, idx); + mlx5dr_buddy_cleanup(buddy); + return NULL; + } + + pool->db.buddy_manager->buddies[idx] = buddy; + + return buddy; +} + +static int mlx5dr_pool_buddy_get_mem_chunk(struct mlx5dr_pool *pool, + int order, + uint32_t *buddy_idx, + int *seg) +{ + struct mlx5dr_buddy_mem *buddy; + bool new_mem = false; + int err = 0; + int i; + + *seg = -1; + + /* Find the next free place from the buddy array */ + while (*seg == -1) { + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = mlx5dr_pool_buddy_get_next_buddy(pool, i, + order, + &new_mem); + if (!buddy) { + err = rte_errno; + goto out; + } + + *seg = mlx5dr_buddy_alloc_mem(buddy, order); + if (*seg != -1) + goto found; + + if (pool->flags & MLX5DR_POOL_FLAGS_ONE_RESOURCE) { + DR_LOG(ERR, "Fail to allocate seg for one resource pool"); + err = rte_errno; + goto out; + } + + if (new_mem) { + /* We have new memory pool, should be place for us */ + assert(false); + DR_LOG(ERR, "No memory for order: %d with buddy no: %d", + order, i); + rte_errno = ENOMEM; + err = ENOMEM; + goto out; + } + } + } + +found: + *buddy_idx = i; +out: + return err; +} + +static int mlx5dr_pool_buddy_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk 
*chunk) +{ + int ret = 0; + + /* Go over the buddies and find next free slot */ + ret = mlx5dr_pool_buddy_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_buddy_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_buddy_mem *buddy; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + buddy = pool->db.buddy_manager->buddies[i]; + if (buddy) { + mlx5dr_buddy_cleanup(buddy); + simple_free(buddy); + pool->db.buddy_manager->buddies[i] = NULL; + } + } + + simple_free(pool->db.buddy_manager); +} + +static int mlx5dr_pool_buddy_db_init(struct mlx5dr_pool *pool, uint32_t log_range) +{ + pool->db.buddy_manager = simple_calloc(1, sizeof(*pool->db.buddy_manager)); + if (!pool->db.buddy_manager) { + DR_LOG(ERR, "No mem for buddy_manager with log_range: %d", log_range); + rte_errno = ENOMEM; + return rte_errno; + } + + if (pool->flags & MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE) { + bool new_buddy; + + if (!mlx5dr_pool_buddy_get_next_buddy(pool, 0, log_range, &new_buddy)) { + DR_LOG(ERR, "Failed allocating memory on create log_sz: %d", log_range); + simple_free(pool->db.buddy_manager); + return rte_errno; + } + } + + pool->p_db_uninit = &mlx5dr_pool_buddy_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_buddy_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_buddy_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_create_resource_on_index(struct mlx5dr_pool *pool, + uint32_t alloc_size, int idx) +{ + if (mlx5dr_pool_resource_alloc(pool, alloc_size, idx) != 0) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + return rte_errno; + } + + return 0; +} + +static struct mlx5dr_pool_elements * +mlx5dr_pool_element_create_new_elem(struct mlx5dr_pool *pool, uint32_t order, int idx) +{ + struct mlx5dr_pool_elements *elem; + uint32_t alloc_size; + + alloc_size = pool->alloc_log_sz; + + elem = simple_calloc(1, sizeof(*elem)); + if (!elem) { + DR_LOG(ERR, "Failed to create elem order: %d index: %d", + order, idx); + rte_errno = ENOMEM; + return NULL; + } + /*sharing the same resource, also means that all the elements are with size 1*/ + if ((pool->flags & MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS) && + !(pool->flags & MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) { + /* Currently all chunks in size 1 */ + elem->bitmap = mlx5dr_pool_create_and_init_bitmap(alloc_size - order); + if (!elem->bitmap) { + DR_LOG(ERR, "Failed to create bitmap type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_elem; + } + } + + if (mlx5dr_pool_create_resource_on_index(pool, alloc_size, idx)) { + DR_LOG(ERR, "Failed to create resource type: %d: size %d index: %d", + pool->type, alloc_size, idx); + goto free_db; + } + + pool->db.element_manager->elements[idx] = elem; + + return elem; + +free_db: + rte_free(elem->bitmap); +free_elem: + simple_free(elem); + return NULL; +} + +static int mlx5dr_pool_element_find_seg(struct mlx5dr_pool_elements *elem, int *seg) +{ + if (mlx5dr_pool_bitmap_get_free_slot(elem->bitmap, (uint32_t *)seg)) { + elem->is_full = true; + return ENOMEM; + } + return 0; +} + +static int +mlx5dr_pool_onesize_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + struct mlx5dr_pool_elements *elem; + + elem = pool->db.element_manager->elements[0]; + if (!elem) + elem = mlx5dr_pool_element_create_new_elem(pool, order, 0); + if (!elem) + goto 
err_no_elem; + + *idx = 0; + + if (mlx5dr_pool_element_find_seg(elem, seg) != 0) { + DR_LOG(ERR, "No more resources (last request order: %d)", order); + rte_errno = ENOMEM; + return ENOMEM; + } + + elem->num_of_elements++; + return 0; + +err_no_elem: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int +mlx5dr_pool_general_element_get_mem_chunk(struct mlx5dr_pool *pool, uint32_t order, + uint32_t *idx, int *seg) +{ + int ret; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + if (!pool->resource[i]) { + ret = mlx5dr_pool_create_resource_on_index(pool, order, i); + if (ret) + goto err_no_res; + *idx = i; + *seg = 0; /* One memory slot in that element */ + return 0; + } + } + + rte_errno = ENOMEM; + DR_LOG(ERR, "No more resources (last request order: %d)", order); + return ENOMEM; + +err_no_res: + DR_LOG(ERR, "Failed to allocate element for order: %d", order); + return ENOMEM; +} + +static int mlx5dr_pool_general_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_general_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_pool_general_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE) + mlx5dr_pool_resource_free(pool, chunk->resource_idx); +} + +static void mlx5dr_pool_general_element_db_uninit(struct mlx5dr_pool *pool) +{ + (void)pool; +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * allocate resource and give it. + * - When free that chunk: + * the resource is freed. 
+ */ +static int mlx5dr_pool_general_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_pool_general_element_db_uninit; + pool->p_get_chunk = &mlx5dr_pool_general_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_pool_general_element_db_put_chunk; + + return 0; +} + +static void mlx5dr_onesize_element_db_destroy_element(struct mlx5dr_pool *pool, + struct mlx5dr_pool_elements *elem, + struct mlx5dr_pool_chunk *chunk) +{ + assert(pool->resource[chunk->resource_idx]); + + mlx5dr_pool_resource_free(pool, chunk->resource_idx); + + simple_free(elem); + pool->db.element_manager->elements[chunk->resource_idx] = NULL; +} + +static void mlx5dr_onesize_element_db_put_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + struct mlx5dr_pool_elements *elem; + + assert(chunk->resource_idx == 0); + + elem = pool->db.element_manager->elements[chunk->resource_idx]; + if (!elem) { + assert(false); + DR_LOG(ERR, "No such element (%d)", chunk->resource_idx); + return; + } + + rte_bitmap_set(elem->bitmap, chunk->offset); + elem->is_full = false; + elem->num_of_elements--; + + if (pool->flags & MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE && + !elem->num_of_elements) + mlx5dr_onesize_element_db_destroy_element(pool, elem, chunk); +} + +static int mlx5dr_onesize_element_db_get_chunk(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret = 0; + + /* Go over all memory elements and find/allocate free slot */ + ret = mlx5dr_pool_onesize_element_get_mem_chunk(pool, chunk->order, + &chunk->resource_idx, + &chunk->offset); + if (ret) + DR_LOG(ERR, "Failed to get free slot for chunk with order: %d", + chunk->order); + + return ret; +} + +static void mlx5dr_onesize_element_db_uninit(struct mlx5dr_pool *pool) +{ + struct mlx5dr_pool_elements *elem; + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) { + elem = pool->db.element_manager->elements[i]; + if (elem) { + if (elem->bitmap) + rte_free(elem->bitmap); + simple_free(elem); + pool->db.element_manager->elements[i] = NULL; + } + } + simple_free(pool->db.element_manager); +} + +/* This memory management works as the following: + * - At start doesn't allocate no mem at all. + * - When new request for chunk arrived: + * aloocate the first and only slot of memory/resource + * when it ended return error. 
+ */ +static int mlx5dr_pool_onesize_element_db_init(struct mlx5dr_pool *pool) +{ + pool->db.element_manager = simple_calloc(1, sizeof(*pool->db.element_manager)); + if (!pool->db.element_manager) { + DR_LOG(ERR, "No mem for general elemnt_manager"); + rte_errno = ENOMEM; + return rte_errno; + } + + pool->p_db_uninit = &mlx5dr_onesize_element_db_uninit; + pool->p_get_chunk = &mlx5dr_onesize_element_db_get_chunk; + pool->p_put_chunk = &mlx5dr_onesize_element_db_put_chunk; + + return 0; +} + +static int mlx5dr_pool_db_init(struct mlx5dr_pool *pool, + enum mlx5dr_db_type db_type) +{ + int ret; + + if (db_type == MLX5DR_POOL_DB_TYPE_GENERAL_SIZE) + ret = mlx5dr_pool_general_element_db_init(pool); + else if (db_type == MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE) + ret = mlx5dr_pool_onesize_element_db_init(pool); + else + ret = mlx5dr_pool_buddy_db_init(pool, pool->alloc_log_sz); + + if (ret) { + DR_LOG(ERR, "Failed to init general db : %d (ret: %d)", db_type, ret); + return ret; + } + + return 0; +} + +static void mlx5dr_pool_db_unint(struct mlx5dr_pool *pool) +{ + pool->p_db_uninit(pool); +} + +int +mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + int ret; + + pthread_spin_lock(&pool->lock); + ret = pool->p_get_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); + + return ret; +} + +void mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + pthread_spin_lock(&pool->lock); + pool->p_put_chunk(pool, chunk); + pthread_spin_unlock(&pool->lock); +} + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, struct mlx5dr_pool_attr *pool_attr) +{ + enum mlx5dr_db_type res_db_type; + struct mlx5dr_pool *pool; + + pool = simple_calloc(1, sizeof(*pool)); + if (!pool) + return NULL; + + pool->ctx = ctx; + pool->type = pool_attr->pool_type; + pool->alloc_log_sz = pool_attr->alloc_log_sz; + pool->flags = pool_attr->flags; + pool->tbl_type = pool_attr->table_type; + pool->opt_type = pool_attr->opt_type; + + pthread_spin_init(&pool->lock, PTHREAD_PROCESS_PRIVATE); + + /* Support general db */ + if (pool->flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK)) + res_db_type = MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; + else if (pool->flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS)) + res_db_type = MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; + else + res_db_type = MLX5DR_POOL_DB_TYPE_BUDDY; + + pool->alloc_log_sz = pool_attr->alloc_log_sz; + + if (mlx5dr_pool_db_init(pool, res_db_type)) + goto free_pool; + + return pool; + +free_pool: + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return NULL; +} + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool) +{ + int i; + + for (i = 0; i < MLX5DR_POOL_RESOURCE_ARR_SZ; i++) + if (pool->resource[i]) + mlx5dr_pool_resource_free(pool, i); + + mlx5dr_pool_db_unint(pool); + + pthread_spin_destroy(&pool->lock); + simple_free(pool); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pool.h b/drivers/net/mlx5/hws/mlx5dr_pool.h new file mode 100644 index 0000000000..cd12c3ab9a --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pool.h @@ -0,0 +1,152 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_POOL_H_ +#define MLX5DR_POOL_H_ + +enum mlx5dr_pool_type { + MLX5DR_POOL_TYPE_STE, + MLX5DR_POOL_TYPE_STC, +}; + +#define MLX5DR_POOL_STC_LOG_SZ 14 + +#define MLX5DR_POOL_RESOURCE_ARR_SZ 100 + +struct mlx5dr_pool_chunk { + uint32_t resource_idx; + /* 
Internal offset, relative to base index */ + int offset; + int order; +}; + +struct mlx5dr_pool_resource { + struct mlx5dr_pool *pool; + struct mlx5dr_devx_obj *devx_obj; + uint32_t base_id; + uint32_t range; +}; + +enum mlx5dr_pool_flags { + /* Only one resource in the pool */ + MLX5DR_POOL_FLAGS_ONE_RESOURCE = 1 << 0, + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE = 1 << 1, + /* No sharing of resources between chunks */ + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK = 1 << 2, + /* All objects have the same size */ + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS = 1 << 3, + /* Managed by the buddy allocator */ + MLX5DR_POOL_FLAGS_BUDDY_MANAGED = 1 << 4, + /* Allocate pool_type memory on pool creation */ + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE = 1 << 5, + + /* These values should be used by the caller */ + MLX5DR_POOL_FLAGS_FOR_STC_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS, + MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL = + MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE | + MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK, + MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL = + MLX5DR_POOL_FLAGS_ONE_RESOURCE | + MLX5DR_POOL_FLAGS_BUDDY_MANAGED | + MLX5DR_POOL_FLAGS_ALLOC_MEM_ON_CREATE, +}; + +enum mlx5dr_pool_optimize { + MLX5DR_POOL_OPTIMIZE_NONE = 0x0, + MLX5DR_POOL_OPTIMIZE_ORIG = 0x1, + MLX5DR_POOL_OPTIMIZE_MIRROR = 0x2, +}; + +struct mlx5dr_pool_attr { + enum mlx5dr_pool_type pool_type; + enum mlx5dr_table_type table_type; + enum mlx5dr_pool_flags flags; + enum mlx5dr_pool_optimize opt_type; + /* Allocation size once memory is depleted */ + size_t alloc_log_sz; +}; + +enum mlx5dr_db_type { + /* Used for allocating chunks of big memory, each element has its own resource in the FW */ + MLX5DR_POOL_DB_TYPE_GENERAL_SIZE, + /* One resource only, all elements have the same single size */ + MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE, + /* Many resources, memory is allocated with the buddy mechanism */ + MLX5DR_POOL_DB_TYPE_BUDDY, +}; + +struct mlx5dr_buddy_manager { + struct mlx5dr_buddy_mem *buddies[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_elements { + uint32_t num_of_elements; + struct rte_bitmap *bitmap; + bool is_full; +}; + +struct mlx5dr_element_manager { + struct mlx5dr_pool_elements *elements[MLX5DR_POOL_RESOURCE_ARR_SZ]; +}; + +struct mlx5dr_pool_db { + enum mlx5dr_db_type type; + union { + struct mlx5dr_element_manager *element_manager; + struct mlx5dr_buddy_manager *buddy_manager; + }; +}; + +typedef int (*mlx5dr_pool_db_get_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_db_put_chunk)(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); +typedef void (*mlx5dr_pool_unint_db)(struct mlx5dr_pool *pool); + +struct mlx5dr_pool { + struct mlx5dr_context *ctx; + enum mlx5dr_pool_type type; + enum mlx5dr_pool_flags flags; + pthread_spinlock_t lock; + size_t alloc_log_sz; + enum mlx5dr_table_type tbl_type; + enum mlx5dr_pool_optimize opt_type; + struct mlx5dr_pool_resource *resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + struct mlx5dr_pool_resource *mirror_resource[MLX5DR_POOL_RESOURCE_ARR_SZ]; + /* DB */ + struct mlx5dr_pool_db db; + /* Functions */ + mlx5dr_pool_unint_db p_db_uninit; + mlx5dr_pool_db_get_chunk p_get_chunk; + mlx5dr_pool_db_put_chunk p_put_chunk; +}; + +struct mlx5dr_pool * +mlx5dr_pool_create(struct mlx5dr_context *ctx, + struct mlx5dr_pool_attr *pool_attr); + +int mlx5dr_pool_destroy(struct mlx5dr_pool *pool); + +int mlx5dr_pool_chunk_alloc(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +void 
mlx5dr_pool_chunk_free(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk); + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->resource[chunk->resource_idx]->devx_obj; +} + +static inline struct mlx5dr_devx_obj * +mlx5dr_pool_chunk_get_base_devx_obj_mirror(struct mlx5dr_pool *pool, + struct mlx5dr_pool_chunk *chunk) +{ + return pool->mirror_resource[chunk->resource_idx]->devx_obj; +} +#endif /* MLX5DR_POOL_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
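A minimal usage sketch of the pool API declared above (illustrative only, not part of the patch): it assumes a valid mlx5dr_context and a table type taken from mlx5dr.h, and uses the matcher-STE flag combination defined in this header.

/* Illustrative sketch, not part of the series */
static int example_ste_chunk_cycle(struct mlx5dr_context *ctx,
				   enum mlx5dr_table_type tbl_type)
{
	struct mlx5dr_pool_attr pool_attr = {0};
	struct mlx5dr_pool_chunk chunk = {0};
	struct mlx5dr_pool *pool;
	int ret;

	pool_attr.pool_type = MLX5DR_POOL_TYPE_STE;
	pool_attr.table_type = tbl_type;
	pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL;
	pool_attr.alloc_log_sz = 10; /* Grow by 2^10 objects once depleted */

	pool = mlx5dr_pool_create(ctx, &pool_attr);
	if (!pool)
		return rte_errno;

	chunk.order = 2; /* Ask for 2^2 contiguous objects */
	ret = mlx5dr_pool_chunk_alloc(pool, &chunk);
	if (!ret) {
		/* chunk.resource_idx and chunk.offset now locate the objects,
		 * mlx5dr_pool_chunk_get_base_devx_obj() returns the backing devx object.
		 */
		mlx5dr_pool_chunk_free(pool, &chunk);
	}

	mlx5dr_pool_destroy(pool);
	return ret;
}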
* RE: [v6 09/18] net/mlx5/hws: Add HWS pool and buddy 2022-10-20 15:57 ` [v6 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-24 6:52 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:52 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Erez Shitrit > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Erez Shitrit > <erezsh@nvidia.com> > Subject: [v6 09/18] net/mlx5/hws: Add HWS pool and buddy > > From: Erez Shitrit <erezsh@nvidia.com> > > HWS needs to manage different types of device memory in an efficient and > quick way. For this, memory pools are being used. > > Signed-off-by: Erez Shitrit <erezsh@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
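For reference, the flag combinations defined in mlx5dr_pool.h map onto the three database types exactly as mlx5dr_pool_create() selects them; the following is a condensed restatement of that selection logic, not additional code in the patch:

/* Flag combination -> db type, as chosen in mlx5dr_pool_create() */
static enum mlx5dr_db_type example_db_type_for_flags(enum mlx5dr_pool_flags flags)
{
	if (flags == (MLX5DR_POOL_FLAGS_RELEASE_FREE_RESOURCE |
		      MLX5DR_POOL_FLAGS_RESOURCE_PER_CHUNK))
		return MLX5DR_POOL_DB_TYPE_GENERAL_SIZE; /* Matcher STE pool */
	if (flags == (MLX5DR_POOL_FLAGS_ONE_RESOURCE |
		      MLX5DR_POOL_FLAGS_FIXED_SIZE_OBJECTS))
		return MLX5DR_POOL_DB_TYPE_ONE_SIZE_RESOURCE; /* STC pool */
	return MLX5DR_POOL_DB_TYPE_BUDDY; /* STE action pool and other combinations */
}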
* [v6 10/18] net/mlx5/hws: Add HWS send layer 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (8 preceding siblings ...) 2022-10-20 15:57 ` [v6 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker ` (8 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika, Mark Bloch HWS configures flows to the HW using a QP, each WQE has the details of the flow we want to offload. The send layer allocates the resources needed to send the request to the HW as well as managing the queues, getting completions and handling failures. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_send.c | 844 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++++++++++ 2 files changed, 1119 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c new file mode 100644 index 0000000000..26904a9040 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.c @@ -0,0 +1,844 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + unsigned int idx = send_sq->head_dep_idx++ & (queue->num_entries - 1); + + memset(&send_sq->dep_wqe[idx].wqe_data.tag, 0, MLX5DR_MATCH_TAG_SZ); + + return &send_sq->dep_wqe[idx]; +} + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue) +{ + queue->send_ring->send_sq.head_dep_idx--; +} + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *send_sq = &queue->send_ring->send_sq; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + + /* Fence first from previous depend WQEs */ + ste_attr.send_attr.fence = 1; + + while (send_sq->head_dep_idx != send_sq->tail_dep_idx) { + dep_wqe = &send_sq->dep_wqe[send_sq->tail_dep_idx++ & (queue->num_entries - 1)]; + + /* Notify HW on the last WQE */ + ste_attr.send_attr.notify_hw = (send_sq->tail_dep_idx == send_sq->head_dep_idx); + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + ste_attr.used_id_rtc_0 = &dep_wqe->rule->rtc_0; + ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1; + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + + mlx5dr_send_ste(queue, &ste_attr); + + /* Fencing is done only on the first WQE */ + ste_attr.send_attr.fence = 0; + } +} + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue) +{ + struct 
mlx5dr_send_engine_post_ctrl ctrl; + + ctrl.queue = queue; + /* Currently only one send ring is supported */ + ctrl.send_ring = &queue->send_ring[0]; + ctrl.num_wqebbs = 0; + + return ctrl; +} + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len) +{ + struct mlx5dr_send_ring_sq *send_sq = &ctrl->send_ring->send_sq; + unsigned int idx; + + idx = (send_sq->cur_post + ctrl->num_wqebbs) & send_sq->buf_mask; + + *buf = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + *len = MLX5_SEND_WQE_BB; + + if (!ctrl->num_wqebbs) { + *buf += sizeof(struct mlx5dr_wqe_ctrl_seg); + *len -= sizeof(struct mlx5dr_wqe_ctrl_seg); + } + + ctrl->num_wqebbs++; +} + +static void mlx5dr_send_engine_post_ring(struct mlx5dr_send_ring_sq *sq, + struct mlx5dv_devx_uar *uar, + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl) +{ + rte_compiler_barrier(); + sq->db[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->cur_post); + + rte_wmb(); + mlx5dr_uar_write64_relaxed(*((uint64_t *)wqe_ctrl), uar->reg_addr); + rte_wmb(); +} + +static void +mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data, + struct mlx5dr_rule_match_tag *tag, + bool is_jumbo) +{ + if (is_jumbo) { + /* Clear previous possibly dirty control */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ); + memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ); + } else { + /* Clear previous possibly dirty control and actions */ + memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ); + memcpy(wqe_data->tag, tag->match, MLX5DR_MATCH_TAG_SZ); + } +} + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr) +{ + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_ring_sq *sq; + uint32_t flags = 0; + unsigned int idx; + + sq = &ctrl->send_ring->send_sq; + idx = sq->cur_post & sq->buf_mask; + sq->last_idx = idx; + + wqe_ctrl = (void *)(sq->buf + (idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->opmod_idx_opcode = + rte_cpu_to_be_32((attr->opmod << 24) | + ((sq->cur_post & 0xffff) << 8) | + attr->opcode); + wqe_ctrl->qpn_ds = + rte_cpu_to_be_32((attr->len + sizeof(struct mlx5dr_wqe_ctrl_seg)) / 16 | + sq->sqn << 8); + + wqe_ctrl->imm = rte_cpu_to_be_32(attr->id); + + flags |= attr->notify_hw ? MLX5_WQE_CTRL_CQ_UPDATE : 0; + flags |= attr->fence ? 
MLX5_WQE_CTRL_INITIATOR_SMALL_FENCE : 0; + wqe_ctrl->flags = rte_cpu_to_be_32(flags); + + sq->wr_priv[idx].id = attr->id; + sq->wr_priv[idx].retry_id = attr->retry_id; + + sq->wr_priv[idx].rule = attr->rule; + sq->wr_priv[idx].user_data = attr->user_data; + sq->wr_priv[idx].num_wqebbs = ctrl->num_wqebbs; + + if (attr->rule) { + sq->wr_priv[idx].rule->pending_wqes++; + sq->wr_priv[idx].used_id = attr->used_id; + } + + sq->cur_post += ctrl->num_wqebbs; + + if (attr->notify_hw) + mlx5dr_send_engine_post_ring(sq, ctrl->queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_wqe(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_engine_post_attr *send_attr, + struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl, + void *send_wqe_data, + void *send_wqe_tag, + bool is_jumbo, + uint8_t gta_opcode, + uint32_t direct_index) +{ + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + size_t wqe_len; + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + wqe_ctrl->op_dirix = htobe32(gta_opcode << 28 | direct_index); + memcpy(wqe_ctrl->stc_ix, send_wqe_ctrl->stc_ix, sizeof(send_wqe_ctrl->stc_ix)); + + if (send_wqe_data) + memcpy(wqe_data, send_wqe_data, sizeof(*wqe_data)); + else + mlx5dr_send_wqe_set_tag(wqe_data, send_wqe_tag, is_jumbo); + + mlx5dr_send_engine_post_end(&ctrl, send_attr); +} + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr) +{ + struct mlx5dr_send_engine_post_attr *send_attr = &ste_attr->send_attr; + uint8_t notify_hw = send_attr->notify_hw; + uint8_t fence = send_attr->fence; + + if (ste_attr->rtc_1) { + send_attr->id = ste_attr->rtc_1; + send_attr->used_id = ste_attr->used_id_rtc_1; + send_attr->retry_id = ste_attr->retry_rtc_1; + send_attr->fence = fence; + send_attr->notify_hw = notify_hw && !ste_attr->rtc_0; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + if (ste_attr->rtc_0) { + send_attr->id = ste_attr->rtc_0; + send_attr->used_id = ste_attr->used_id_rtc_0; + send_attr->retry_id = ste_attr->retry_rtc_0; + send_attr->fence = fence && !ste_attr->rtc_1; + send_attr->notify_hw = notify_hw; + mlx5dr_send_wqe(queue, send_attr, + ste_attr->wqe_ctrl, + ste_attr->wqe_data, + ste_attr->wqe_tag, + ste_attr->wqe_tag_is_jumbo, + ste_attr->gta_opcode, + ste_attr->direct_index); + } + + /* Restore to ortginal requested values */ + send_attr->notify_hw = notify_hw; + send_attr->fence = fence; +} + +static void mlx5dr_send_engine_retry_post_send(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_send_ring_sq *send_sq; + unsigned int idx; + size_t wqe_len; + char *p; + + send_attr.rule = priv->rule; + send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + send_attr.len = MLX5_SEND_WQE_BB * 2 - sizeof(struct mlx5dr_wqe_ctrl_seg); + send_attr.notify_hw = 1; + send_attr.fence = 0; + send_attr.user_data = priv->user_data; + send_attr.id = priv->retry_id; + send_attr.used_id = priv->used_id; + + ctrl = 
mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_data, &wqe_len); + + send_sq = &ctrl.send_ring->send_sq; + idx = wqe_cnt & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta ctrl */ + memcpy(wqe_ctrl, p + sizeof(struct mlx5dr_wqe_ctrl_seg), + MLX5_SEND_WQE_BB - sizeof(struct mlx5dr_wqe_ctrl_seg)); + + idx = (wqe_cnt + 1) & send_sq->buf_mask; + p = send_sq->buf + (idx << MLX5_SEND_WQE_SHIFT); + + /* Copy old gta data */ + memcpy(wqe_data, p, MLX5_SEND_WQE_BB); + + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue) +{ + struct mlx5dr_send_ring_sq *sq = &queue->send_ring[0].send_sq; + struct mlx5dr_wqe_ctrl_seg *wqe_ctrl; + + wqe_ctrl = (void *)(sq->buf + (sq->last_idx << MLX5_SEND_WQE_SHIFT)); + + wqe_ctrl->flags |= rte_cpu_to_be_32(MLX5_WQE_CTRL_CQ_UPDATE); + + mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl); +} + +static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_priv *priv, + uint16_t wqe_cnt, + enum rte_flow_op_status *status) +{ + priv->rule->pending_wqes--; + + if (*status == RTE_FLOW_OP_ERROR) { + if (priv->retry_id) { + mlx5dr_send_engine_retry_post_send(queue, priv, wqe_cnt); + return; + } + /* Some part of the rule failed */ + priv->rule->status = MLX5DR_RULE_STATUS_FAILING; + *priv->used_id = 0; + } else { + *priv->used_id = priv->id; + } + + /* Update rule status for the last completion */ + if (!priv->rule->pending_wqes) { + if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) { + /* Rule completely failed and doesn't require cleanup */ + if (!priv->rule->rtc_0 && !priv->rule->rtc_1) + priv->rule->status = MLX5DR_RULE_STATUS_FAILED; + + *status = RTE_FLOW_OP_ERROR; + } else { + /* Increase the status, this only works on good flow as the enum + * is arrange it away creating -> created -> deleting -> deleted + */ + priv->rule->status++; + *status = RTE_FLOW_OP_SUCCESS; + /* Rule was deleted now we can safely release action STEs */ + if (priv->rule->status == MLX5DR_RULE_STATUS_DELETED) + mlx5dr_rule_free_action_ste_idx(priv->rule); + } + } +} + +static void mlx5dr_send_engine_update(struct mlx5dr_send_engine *queue, + struct mlx5_cqe64 *cqe, + struct mlx5dr_send_ring_priv *priv, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb, + uint16_t wqe_cnt) +{ + enum rte_flow_op_status status; + + if (!cqe || (likely(rte_be_to_cpu_32(cqe->byte_cnt) >> 31 == 0) && + likely(mlx5dv_get_cqe_opcode(cqe) == MLX5_CQE_REQ))) { + status = RTE_FLOW_OP_SUCCESS; + } else { + status = RTE_FLOW_OP_ERROR; + } + + if (priv->user_data) { + if (priv->rule) { + mlx5dr_send_engine_update_rule(queue, priv, wqe_cnt, &status); + /* Completion is provided on the last rule WQE */ + if (priv->rule->pending_wqes) + return; + } + + if (*i < res_nb) { + res[*i].user_data = priv->user_data; + res[*i].status = status; + (*i)++; + mlx5dr_send_engine_dec_rule(queue); + } else { + mlx5dr_send_engine_gen_comp(queue, priv->user_data, status); + } + } +} + +static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *send_ring, + struct rte_flow_op_result res[], + int64_t *i, + uint32_t res_nb) +{ + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + uint32_t cq_idx = cq->cons_index & cq->ncqe_mask; + struct 
mlx5dr_send_ring_priv *priv; + struct mlx5_cqe64 *cqe; + uint32_t offset_cqe64; + uint8_t cqe_opcode; + uint8_t cqe_owner; + uint16_t wqe_cnt; + uint8_t sw_own; + + offset_cqe64 = RTE_CACHE_LINE_SIZE - sizeof(struct mlx5_cqe64); + cqe = (void *)(cq->buf + (cq_idx << cq->cqe_log_sz) + offset_cqe64); + + sw_own = (cq->cons_index & cq->ncqe) ? 1 : 0; + cqe_opcode = mlx5dv_get_cqe_opcode(cqe); + cqe_owner = mlx5dv_get_cqe_owner(cqe); + + if (cqe_opcode == MLX5_CQE_INVALID || + cqe_owner != sw_own) + return; + + if (unlikely(mlx5dv_get_cqe_opcode(cqe) != MLX5_CQE_REQ)) + queue->err = true; + + rte_io_rmb(); + + wqe_cnt = be16toh(cqe->wqe_counter) & sq->buf_mask; + + while (cq->poll_wqe != wqe_cnt) { + priv = &sq->wr_priv[cq->poll_wqe]; + mlx5dr_send_engine_update(queue, NULL, priv, res, i, res_nb, 0); + cq->poll_wqe = (cq->poll_wqe + priv->num_wqebbs) & sq->buf_mask; + } + + priv = &sq->wr_priv[wqe_cnt]; + cq->poll_wqe = (wqe_cnt + priv->num_wqebbs) & sq->buf_mask; + mlx5dr_send_engine_update(queue, cqe, priv, res, i, res_nb, wqe_cnt); + cq->cons_index++; +} + +static void mlx5dr_send_engine_poll_cqs(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + int j; + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + mlx5dr_send_engine_poll_cq(queue, &queue->send_ring[j], + res, polled, res_nb); + + *queue->send_ring[j].send_cq.db = + htobe32(queue->send_ring[j].send_cq.cons_index & 0xffffff); + } +} + +static void mlx5dr_send_engine_poll_list(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + int64_t *polled, + uint32_t res_nb) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + while (comp->ci != comp->pi) { + if (*polled < res_nb) { + res[*polled].status = + comp->entries[comp->ci].status; + res[*polled].user_data = + comp->entries[comp->ci].user_data; + (*polled)++; + comp->ci = (comp->ci + 1) & comp->mask; + mlx5dr_send_engine_dec_rule(queue); + } else { + return; + } + } +} + +static int mlx5dr_send_engine_poll(struct mlx5dr_send_engine *queue, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + int64_t polled = 0; + + mlx5dr_send_engine_poll_list(queue, res, &polled, res_nb); + + if (polled >= res_nb) + return polled; + + mlx5dr_send_engine_poll_cqs(queue, res, &polled, res_nb); + + return polled; +} + +int mlx5dr_send_queue_poll(struct mlx5dr_context *ctx, + uint16_t queue_id, + struct rte_flow_op_result res[], + uint32_t res_nb) +{ + return mlx5dr_send_engine_poll(&ctx->send_queue[queue_id], + res, res_nb); +} + +static int mlx5dr_send_ring_create_sq_obj(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct mlx5dr_send_ring_cq *cq, + size_t log_wq_sz) +{ + struct mlx5dr_cmd_sq_create_attr attr = {0}; + int err; + + attr.cqn = cq->cqn; + attr.pdn = ctx->pd_num; + attr.page_id = queue->uar->page_id; + attr.dbr_id = sq->db_umem->umem_id; + attr.wq_id = sq->buf_umem->umem_id; + attr.log_wq_sz = log_wq_sz; + + sq->obj = mlx5dr_cmd_sq_create(ctx->ibv_ctx, &attr); + if (!sq->obj) + return rte_errno; + + sq->sqn = sq->obj->id; + + err = mlx5dr_cmd_sq_modify_rdy(sq->obj); + if (err) + goto free_sq; + + return 0; + +free_sq: + mlx5dr_cmd_destroy_obj(sq->obj); + + return err; +} + +static inline unsigned long align(unsigned long val, unsigned long align) +{ + return (val + align - 1) & ~(align - 1); +} + +static int mlx5dr_send_ring_open_sq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_sq *sq, + struct 
mlx5dr_send_ring_cq *cq) +{ + size_t sq_log_buf_sz; + size_t buf_aligned; + size_t sq_buf_sz; + size_t buf_sz; + int err; + + buf_sz = queue->num_entries * MAX_WQES_PER_RULE; + sq_log_buf_sz = log2above(buf_sz); + sq_buf_sz = 1 << (sq_log_buf_sz + log2above(MLX5_SEND_WQE_BB)); + sq->reg_addr = queue->uar->reg_addr; + + buf_aligned = align(sq_buf_sz, sysconf(_SC_PAGESIZE)); + err = posix_memalign((void **)&sq->buf, sysconf(_SC_PAGESIZE), buf_aligned); + if (err) { + rte_errno = ENOMEM; + return err; + } + memset(sq->buf, 0, buf_aligned); + + err = posix_memalign((void **)&sq->db, 8, 8); + if (err) + goto free_buf; + + sq->buf_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->buf, sq_buf_sz, 0); + + if (!sq->buf_umem) { + err = errno; + goto free_db; + } + + sq->db_umem = mlx5_glue->devx_umem_reg(ctx->ibv_ctx, sq->db, 8, 0); + if (!sq->db_umem) { + err = errno; + goto free_buf_umem; + } + + err = mlx5dr_send_ring_create_sq_obj(ctx, queue, sq, cq, sq_log_buf_sz); + + if (err) + goto free_db_umem; + + sq->wr_priv = simple_malloc(sizeof(*sq->wr_priv) * buf_sz); + if (!sq->wr_priv) { + err = ENOMEM; + goto destroy_sq_obj; + } + + sq->dep_wqe = simple_calloc(queue->num_entries, sizeof(*sq->dep_wqe)); + if (!sq->dep_wqe) { + err = ENOMEM; + goto destroy_wr_priv; + } + + sq->buf_mask = buf_sz - 1; + + return 0; + +destroy_wr_priv: + simple_free(sq->wr_priv); +destroy_sq_obj: + mlx5dr_cmd_destroy_obj(sq->obj); +free_db_umem: + mlx5_glue->devx_umem_dereg(sq->db_umem); +free_buf_umem: + mlx5_glue->devx_umem_dereg(sq->buf_umem); +free_db: + free(sq->db); +free_buf: + free(sq->buf); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_sq(struct mlx5dr_send_ring_sq *sq) +{ + simple_free(sq->dep_wqe); + mlx5dr_cmd_destroy_obj(sq->obj); + mlx5_glue->devx_umem_dereg(sq->db_umem); + mlx5_glue->devx_umem_dereg(sq->buf_umem); + simple_free(sq->wr_priv); + free(sq->db); + free(sq->buf); +} + +static int mlx5dr_send_ring_open_cq(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring_cq *cq) +{ + struct mlx5dv_cq mlx5_cq = {0}; + struct mlx5dv_obj obj; + struct ibv_cq *ibv_cq; + size_t cq_size; + int err; + + cq_size = queue->num_entries; + ibv_cq = mlx5_glue->create_cq(ctx->ibv_ctx, cq_size, NULL, NULL, 0); + if (!ibv_cq) { + DR_LOG(ERR, "Failed to create CQ"); + rte_errno = errno; + return rte_errno; + } + + obj.cq.in = ibv_cq; + obj.cq.out = &mlx5_cq; + err = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_CQ); + if (err) { + err = errno; + goto close_cq; + } + + cq->buf = mlx5_cq.buf; + cq->db = mlx5_cq.dbrec; + cq->ncqe = mlx5_cq.cqe_cnt; + cq->cqe_sz = mlx5_cq.cqe_size; + cq->cqe_log_sz = log2above(cq->cqe_sz); + cq->ncqe_mask = cq->ncqe - 1; + cq->buf_sz = cq->cqe_sz * cq->ncqe; + cq->cqn = mlx5_cq.cqn; + cq->ibv_cq = ibv_cq; + + return 0; + +close_cq: + mlx5_glue->destroy_cq(ibv_cq); + rte_errno = err; + return err; +} + +static void mlx5dr_send_ring_close_cq(struct mlx5dr_send_ring_cq *cq) +{ + mlx5_glue->destroy_cq(cq->ibv_cq); +} + +static void mlx5dr_send_ring_close(struct mlx5dr_send_ring *ring) +{ + mlx5dr_send_ring_close_sq(&ring->send_sq); + mlx5dr_send_ring_close_cq(&ring->send_cq); +} + +static int mlx5dr_send_ring_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ring *ring) +{ + int err; + + err = mlx5dr_send_ring_open_cq(ctx, queue, &ring->send_cq); + if (err) + return err; + + err = mlx5dr_send_ring_open_sq(ctx, queue, &ring->send_sq, &ring->send_cq); + if (err) + goto close_cq; + + return err; + 
+close_cq: + mlx5dr_send_ring_close_cq(&ring->send_cq); + + return err; +} + +static void __mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue, + uint16_t i) +{ + while (i--) + mlx5dr_send_ring_close(&queue->send_ring[i]); +} + +static void mlx5dr_send_rings_close(struct mlx5dr_send_engine *queue) +{ + __mlx5dr_send_rings_close(queue, queue->rings); +} + +static int mlx5dr_send_rings_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue) +{ + uint16_t i; + int err; + + for (i = 0; i < queue->rings; i++) { + err = mlx5dr_send_ring_open(ctx, queue, &queue->send_ring[i]); + if (err) + goto free_rings; + } + + return 0; + +free_rings: + __mlx5dr_send_rings_close(queue, i); + + return err; +} + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue) +{ + mlx5dr_send_rings_close(queue); + simple_free(queue->completed.entries); + mlx5_glue->devx_free_uar(queue->uar); +} + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size) +{ + struct mlx5dv_devx_uar *uar; + int err; + +#ifdef MLX5DV_UAR_ALLOC_TYPE_NC + uar = mlx5_glue->devx_alloc_uar(ctx->ibv_ctx, MLX5_IB_UAPI_UAR_ALLOC_TYPE_NC); + if (!uar) { + rte_errno = errno; + return rte_errno; + } +#else + uar = NULL; + rte_errno = ENOTSUP; + return rte_errno; +#endif + + queue->uar = uar; + queue->rings = MLX5DR_NUM_SEND_RINGS; + queue->num_entries = roundup_pow_of_two(queue_size); + queue->used_entries = 0; + queue->th_entries = queue->num_entries; + + queue->completed.entries = simple_calloc(queue->num_entries, + sizeof(queue->completed.entries[0])); + if (!queue->completed.entries) { + rte_errno = ENOMEM; + goto free_uar; + } + queue->completed.pi = 0; + queue->completed.ci = 0; + queue->completed.mask = queue->num_entries - 1; + + err = mlx5dr_send_rings_open(ctx, queue); + if (err) + goto free_completed_entries; + + return 0; + +free_completed_entries: + simple_free(queue->completed.entries); +free_uar: + mlx5_glue->devx_free_uar(uar); + return rte_errno; +} + +static void __mlx5dr_send_queues_close(struct mlx5dr_context *ctx, uint16_t queues) +{ + struct mlx5dr_send_engine *queue; + + while (queues--) { + queue = &ctx->send_queue[queues]; + + mlx5dr_send_queue_close(queue); + } +} + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx) +{ + __mlx5dr_send_queues_close(ctx, ctx->queues); + simple_free(ctx->send_queue); +} + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size) +{ + int err = 0; + uint32_t i; + + /* Open one extra queue for control path */ + ctx->queues = queues + 1; + + ctx->send_queue = simple_calloc(ctx->queues, sizeof(*ctx->send_queue)); + if (!ctx->send_queue) { + rte_errno = ENOMEM; + return rte_errno; + } + + for (i = 0; i < ctx->queues; i++) { + err = mlx5dr_send_queue_open(ctx, &ctx->send_queue[i], queue_size); + if (err) + goto close_send_queues; + } + + return 0; + +close_send_queues: + __mlx5dr_send_queues_close(ctx, i); + + simple_free(ctx->send_queue); + + return err; +} + +int mlx5dr_send_queue_action(struct mlx5dr_context *ctx, + uint16_t queue_id, + uint32_t actions) +{ + struct mlx5dr_send_ring_sq *send_sq; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[queue_id]; + send_sq = &queue->send_ring->send_sq; + + if (actions == MLX5DR_SEND_QUEUE_ACTION_DRAIN) { + if (send_sq->head_dep_idx != send_sq->tail_dep_idx) + /* Send dependent WQEs to drain the queue */ + mlx5dr_send_all_dep_wqe(queue); + else + /* Signal on the last posted WQE */ + 
mlx5dr_send_engine_flush_queue(queue); + } else { + rte_errno = -EINVAL; + return rte_errno; + } + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h new file mode 100644 index 0000000000..8d4769495d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_send.h @@ -0,0 +1,275 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_SEND_H_ +#define MLX5DR_SEND_H_ + +#define MLX5DR_NUM_SEND_RINGS 1 + +/* As a single operation requires at least two WQEBBS. + * This means a maximum of 16 such operations per rule. + */ +#define MAX_WQES_PER_RULE 32 + +/* WQE Control segment. */ +struct mlx5dr_wqe_ctrl_seg { + __be32 opmod_idx_opcode; + __be32 qpn_ds; + __be32 flags; + __be32 imm; +}; + +enum mlx5dr_wqe_opcode { + MLX5DR_WQE_OPCODE_TBL_ACCESS = 0x2c, +}; + +enum mlx5dr_wqe_opmod { + MLX5DR_WQE_OPMOD_GTA_STE = 0, + MLX5DR_WQE_OPMOD_GTA_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_opcode { + MLX5DR_WQE_GTA_OP_ACTIVATE = 0, + MLX5DR_WQE_GTA_OP_DEACTIVATE = 1, +}; + +enum mlx5dr_wqe_gta_opmod { + MLX5DR_WQE_GTA_OPMOD_STE = 0, + MLX5DR_WQE_GTA_OPMOD_MOD_ARG = 1, +}; + +enum mlx5dr_wqe_gta_sz { + MLX5DR_WQE_SZ_GTA_CTRL = 48, + MLX5DR_WQE_SZ_GTA_DATA = 64, +}; + +struct mlx5dr_wqe_gta_ctrl_seg { + __be32 op_dirix; + __be32 stc_ix[5]; + __be32 rsvd0[6]; +}; + +struct mlx5dr_wqe_gta_data_seg_ste { + __be32 rsvd0_ctr_id; + __be32 rsvd1[4]; + __be32 action[3]; + __be32 tag[8]; +}; + +struct mlx5dr_wqe_gta_data_seg_arg { + __be32 action_args[8]; +}; + +struct mlx5dr_wqe_gta { + struct mlx5dr_wqe_gta_ctrl_seg gta_ctrl; + union { + struct mlx5dr_wqe_gta_data_seg_ste seg_ste; + struct mlx5dr_wqe_gta_data_seg_arg seg_arg; + }; +}; + +struct mlx5dr_send_ring_cq { + uint8_t *buf; + uint32_t cons_index; + uint32_t ncqe_mask; + uint32_t buf_sz; + uint32_t ncqe; + uint32_t cqe_log_sz; + __be32 *db; + uint16_t poll_wqe; + struct ibv_cq *ibv_cq; + uint32_t cqn; + uint32_t cqe_sz; +}; + +struct mlx5dr_send_ring_priv { + struct mlx5dr_rule *rule; + void *user_data; + uint32_t num_wqebbs; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; +}; + +struct mlx5dr_send_ring_dep_wqe { + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste wqe_data; + struct mlx5dr_rule *rule; + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + void *user_data; +}; + +struct mlx5dr_send_ring_sq { + char *buf; + uint32_t sqn; + __be32 *db; + void *reg_addr; + uint16_t cur_post; + uint16_t buf_mask; + struct mlx5dr_send_ring_priv *wr_priv; + unsigned int last_idx; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + unsigned int head_dep_idx; + unsigned int tail_dep_idx; + struct mlx5dr_devx_obj *obj; + struct mlx5dv_devx_umem *buf_umem; + struct mlx5dv_devx_umem *db_umem; +}; + +struct mlx5dr_send_ring { + struct mlx5dr_send_ring_cq send_cq; + struct mlx5dr_send_ring_sq send_sq; +}; + +struct mlx5dr_completed_poll_entry { + void *user_data; + enum rte_flow_op_status status; +}; + +struct mlx5dr_completed_poll { + struct mlx5dr_completed_poll_entry *entries; + uint16_t ci; + uint16_t pi; + uint16_t mask; +}; + +struct mlx5dr_send_engine { + struct mlx5dr_send_ring send_ring[MLX5DR_NUM_SEND_RINGS]; /* For now 1:1 mapping */ + struct mlx5dv_devx_uar *uar; /* Uar is shared between rings of a queue */ + struct mlx5dr_completed_poll completed; + uint16_t used_entries; + uint16_t th_entries; + uint16_t rings; + uint16_t num_entries; + bool err; +} __rte_cache_aligned; + +struct 
mlx5dr_send_engine_post_ctrl { + struct mlx5dr_send_engine *queue; + struct mlx5dr_send_ring *send_ring; + size_t num_wqebbs; +}; + +struct mlx5dr_send_engine_post_attr { + uint8_t opcode; + uint8_t opmod; + uint8_t notify_hw; + uint8_t fence; + size_t len; + struct mlx5dr_rule *rule; + uint32_t id; + uint32_t retry_id; + uint32_t *used_id; + void *user_data; +}; + +struct mlx5dr_send_ste_attr { + /* rtc / retry_rtc / used_id_rtc override send_attr */ + uint32_t rtc_0; + uint32_t rtc_1; + uint32_t retry_rtc_0; + uint32_t retry_rtc_1; + uint32_t *used_id_rtc_0; + uint32_t *used_id_rtc_1; + bool wqe_tag_is_jumbo; + uint8_t gta_opcode; + uint32_t direct_index; + struct mlx5dr_send_engine_post_attr send_attr; + struct mlx5dr_rule_match_tag *wqe_tag; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + struct mlx5dr_wqe_gta_data_seg_ste *wqe_data; +}; + +/** + * Provide safe 64bit store operation to mlx5 UAR region for + * both 32bit and 64bit architectures. + * + * @param val + * value to write in CPU endian format. + * @param addr + * Address to write to. + * @param lock + * Address of the lock to use for that UAR access. + */ +static __rte_always_inline void +mlx5dr_uar_write64_relaxed(uint64_t val, void *addr) +{ +#ifdef RTE_ARCH_64 + *(uint64_t *)addr = val; +#else /* !RTE_ARCH_64 */ + *(uint32_t *)addr = val; + rte_io_wmb(); + *((uint32_t *)addr + 1) = val >> 32; +#endif +} + +struct mlx5dr_send_ring_dep_wqe * +mlx5dr_send_add_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_abort_new_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_queue_close(struct mlx5dr_send_engine *queue); + +int mlx5dr_send_queue_open(struct mlx5dr_context *ctx, + struct mlx5dr_send_engine *queue, + uint16_t queue_size); + +void mlx5dr_send_queues_close(struct mlx5dr_context *ctx); + +int mlx5dr_send_queues_open(struct mlx5dr_context *ctx, + uint16_t queues, + uint16_t queue_size); + +struct mlx5dr_send_engine_post_ctrl +mlx5dr_send_engine_post_start(struct mlx5dr_send_engine *queue); + +void mlx5dr_send_engine_post_req_wqe(struct mlx5dr_send_engine_post_ctrl *ctrl, + char **buf, size_t *len); + +void mlx5dr_send_engine_post_end(struct mlx5dr_send_engine_post_ctrl *ctrl, + struct mlx5dr_send_engine_post_attr *attr); + +void mlx5dr_send_ste(struct mlx5dr_send_engine *queue, + struct mlx5dr_send_ste_attr *ste_attr); + +void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue); + +static inline bool mlx5dr_send_engine_full(struct mlx5dr_send_engine *queue) +{ + return queue->used_entries >= queue->th_entries; +} + +static inline void mlx5dr_send_engine_inc_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries++; +} + +static inline void mlx5dr_send_engine_dec_rule(struct mlx5dr_send_engine *queue) +{ + queue->used_entries--; +} + +static inline void mlx5dr_send_engine_gen_comp(struct mlx5dr_send_engine *queue, + void *user_data, + int comp_status) +{ + struct mlx5dr_completed_poll *comp = &queue->completed; + + comp->entries[comp->pi].status = comp_status; + comp->entries[comp->pi].user_data = user_data; + + comp->pi = (comp->pi + 1) & comp->mask; +} + +static inline bool mlx5dr_send_engine_err(struct mlx5dr_send_engine *queue) +{ + return queue->err; +} + +#endif /* MLX5DR_SEND_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
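A short sketch (not from the patch) of how completions produced by this layer are consumed, assuming the mlx5dr_send_queue_poll() entry point shown above is exported through mlx5dr.h and the context already has its queues opened:

/* Illustrative sketch: drain one send queue until no completions remain */
static void example_drain_send_queue(struct mlx5dr_context *ctx, uint16_t queue_id)
{
	struct rte_flow_op_result res[64];
	int polled, i;

	do {
		polled = mlx5dr_send_queue_poll(ctx, queue_id, res, RTE_DIM(res));
		for (i = 0; i < polled; i++) {
			if (res[i].status == RTE_FLOW_OP_ERROR)
				DR_LOG(ERR, "Rule operation failed, user_data %p",
				       res[i].user_data);
		}
	} while (polled > 0);
}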
* RE: [v6 10/18] net/mlx5/hws: Add HWS send layer 2022-10-20 15:57 ` [v6 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-24 6:53 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:53 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Mark Bloch > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Mark Bloch <mbloch@nvidia.com> > Subject: [v6 10/18] net/mlx5/hws: Add HWS send layer > > HWS configures flows to the HW using a QP, each WQE has the details of the > flow we want to offload. The send layer allocates the resources needed to > send the request to the HW as well as managing the queues, getting > completions and handling failures. > > Signed-off-by: Mark Bloch <mbloch@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 11/18] net/mlx5/hws: Add HWS definer layer 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (9 preceding siblings ...) 2022-10-20 15:57 ` [v6 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 12/18] net/mlx5/hws: Add HWS context object Alex Vesker ` (7 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Ferruh Yigit, Matan Azrad Cc: dev, orika, Mark Bloch Definers are HW objects that are used for matching, rte items are translated to definers, each definer holds the fields and bit-masks used for HW flow matching. The definer layer is used for finding the most efficient definer for each set of items. In addition to definer creation we also calculate the field copy (fc) array used for efficient items to WQE conversion. Signed-off-by: Mark Bloch <mbloch@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- doc/guides/nics/features/default.ini | 1 + doc/guides/nics/features/mlx5.ini | 1 + drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++++++ 4 files changed, 2555 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini index 27f1a70a87..67ba3567c2 100644 --- a/doc/guides/nics/features/default.ini +++ b/doc/guides/nics/features/default.ini @@ -140,6 +140,7 @@ udp = vlan = vxlan = vxlan_gpe = +meter_color = [rte_flow actions] age = diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini index 8697515385..b129f5787d 100644 --- a/doc/guides/nics/features/mlx5.ini +++ b/doc/guides/nics/features/mlx5.ini @@ -84,6 +84,7 @@ vlan = Y vxlan = Y vxlan_gpe = Y represented_port = Y +meter_color = Y [rte_flow actions] age = I diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c new file mode 100644 index 0000000000..6b98eb8c96 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c @@ -0,0 +1,1968 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define GTP_PDU_SC 0x85 +#define BAD_PORT 0xBAD +#define ETH_TYPE_IPV4_VXLAN 0x0800 +#define ETH_TYPE_IPV6_VXLAN 0x86DD +#define ETH_VXLAN_DEFAULT_PORT 4789 + +#define STE_NO_VLAN 0x0 +#define STE_SVLAN 0x1 +#define STE_CVLAN 0x2 +#define STE_IPV4 0x1 +#define STE_IPV6 0x2 +#define STE_TCP 0x1 +#define STE_UDP 0x2 +#define STE_ICMP 0x3 + +/* Setter function based on bit offset and mask, for 32bit DW*/ +#define _DR_SET_32(p, v, byte_off, bit_off, mask) \ + do { \ + u32 _v = v; \ + *((rte_be32_t *)(p) + ((byte_off) / 4)) = \ + rte_cpu_to_be_32((rte_be_to_cpu_32(*((u32 *)(p) + \ + ((byte_off) / 4))) & \ + (~((mask) << (bit_off)))) | \ + (((_v) & (mask)) << \ + (bit_off))); \ + } while (0) + +/* Setter function based on bit offset and mask */ +#define DR_SET(p, v, byte_off, bit_off, mask) \ + do { \ + if (unlikely((bit_off) < 0)) { \ + u32 _bit_off = -1 * (bit_off); \ + u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \ + _DR_SET_32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \ + _DR_SET_32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \ + (bit_off) % BITS_IN_DW, second_dw_mask); \ + } else { \ + _DR_SET_32(p, v, 
byte_off, (bit_off), (mask)); \ + } \ + } while (0) + +/* Setter function based on byte offset to directly set FULL BE32 value */ +#define DR_SET_BE32(p, v, byte_off, bit_off, mask) \ + (*((rte_be32_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE32 value from ptr */ +#define DR_SET_BE32P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 4) + +/* Setter function based on byte offset to directly set FULL BE16 value */ +#define DR_SET_BE16(p, v, byte_off, bit_off, mask) \ + (*((rte_be16_t *)((uint8_t *)(p) + (byte_off))) = (v)) + +/* Setter function based on byte offset to directly set FULL BE16 value from ptr */ +#define DR_SET_BE16P(p, v_ptr, byte_off, bit_off, mask) \ + memcpy((uint8_t *)(p) + (byte_off), v_ptr, 2) + +#define DR_CALC_FNAME(field, inner) \ + ((inner) ? MLX5DR_DEFINER_FNAME_##field##_I : \ + MLX5DR_DEFINER_FNAME_##field##_O) + +#define DR_CALC_SET_HDR(fc, hdr, field) \ + do { \ + (fc)->bit_mask = __mlx5_mask(definer_hl, hdr.field); \ + (fc)->bit_off = __mlx5_dw_bit_off(definer_hl, hdr.field); \ + (fc)->byte_off = MLX5_BYTE_OFF(definer_hl, hdr.field); \ + } while (0) + +/* Helper to calculate data used by DR_SET */ +#define DR_CALC_SET(fc, hdr, field, is_inner) \ + do { \ + if (is_inner) { \ + DR_CALC_SET_HDR(fc, hdr##_inner, field); \ + } else { \ + DR_CALC_SET_HDR(fc, hdr##_outer, field); \ + } \ + } while (0) + + #define DR_GET(typ, p, fld) \ + ((rte_be_to_cpu_32(*((const rte_be32_t *)(p) + \ + __mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \ + __mlx5_mask(typ, fld)) + +struct mlx5dr_definer_sel_ctrl { + uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */ + uint8_t allowed_lim_dw; /* Limited DW selectors cover offset < 64 */ + uint8_t allowed_bytes; /* Bytes selectors, up to offset 255 */ + uint8_t used_full_dw; + uint8_t used_lim_dw; + uint8_t used_bytes; + uint8_t full_dw_selector[DW_SELECTORS]; + uint8_t lim_dw_selector[DW_SELECTORS_LIMITED]; + uint8_t byte_selector[BYTE_SELECTORS]; +}; + +struct mlx5dr_definer_conv_data { + struct mlx5dr_cmd_query_caps *caps; + struct mlx5dr_definer_fc *fc; + uint8_t relaxed; + uint8_t tunnel; + uint8_t *hl; +}; + +/* Xmacro used to create generic item setter from items */ +#define LIST_OF_FIELDS_INFO \ + X(SET_BE16, eth_type, v->type, rte_flow_item_eth) \ + X(SET_BE32P, eth_smac_47_16, &v->src.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_smac_15_0, &v->src.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE32P, eth_dmac_47_16, &v->dst.addr_bytes[0], rte_flow_item_eth) \ + X(SET_BE16P, eth_dmac_15_0, &v->dst.addr_bytes[4], rte_flow_item_eth) \ + X(SET_BE16, tci, v->tci, rte_flow_item_vlan) \ + X(SET, ipv4_ihl, v->ihl, rte_ipv4_hdr) \ + X(SET, ipv4_tos, v->type_of_service, rte_ipv4_hdr) \ + X(SET, ipv4_time_to_live, v->time_to_live, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_dst_addr, v->dst_addr, rte_ipv4_hdr) \ + X(SET_BE32, ipv4_src_addr, v->src_addr, rte_ipv4_hdr) \ + X(SET, ipv4_next_proto, v->next_proto_id, rte_ipv4_hdr) \ + X(SET, ipv4_version, STE_IPV4, rte_ipv4_hdr) \ + X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \ + X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \ + X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \ + X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, 
ipv6_src_addr_63_32, &v->hdr.src_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_src_addr_31_0, &v->hdr.src_addr[12], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_127_96, &v->hdr.dst_addr[0], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_95_64, &v->hdr.dst_addr[4], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_63_32, &v->hdr.dst_addr[8], rte_flow_item_ipv6) \ + X(SET_BE32P, ipv6_dst_addr_31_0, &v->hdr.dst_addr[12], rte_flow_item_ipv6) \ + X(SET, ipv6_version, STE_IPV6, rte_flow_item_ipv6) \ + X(SET, ipv6_frag, v->has_frag_ext, rte_flow_item_ipv6) \ + X(SET, icmp_protocol, STE_ICMP, rte_flow_item_icmp) \ + X(SET, udp_protocol, STE_UDP, rte_flow_item_udp) \ + X(SET_BE16, udp_src_port, v->hdr.src_port, rte_flow_item_udp) \ + X(SET_BE16, udp_dst_port, v->hdr.dst_port, rte_flow_item_udp) \ + X(SET, tcp_flags, v->hdr.tcp_flags, rte_flow_item_tcp) \ + X(SET, tcp_protocol, STE_TCP, rte_flow_item_tcp) \ + X(SET_BE16, tcp_src_port, v->hdr.src_port, rte_flow_item_tcp) \ + X(SET_BE16, tcp_dst_port, v->hdr.dst_port, rte_flow_item_tcp) \ + X(SET, gtp_udp_port, RTE_GTPU_UDP_PORT, rte_flow_item_gtp) \ + X(SET_BE32, gtp_teid, v->teid, rte_flow_item_gtp) \ + X(SET, gtp_msg_type, v->msg_type, rte_flow_item_gtp) \ + X(SET, gtp_ext_flag, !!v->v_pt_rsv_flags, rte_flow_item_gtp) \ + X(SET, gtp_next_ext_hdr, GTP_PDU_SC, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_pdu, v->hdr.type, rte_flow_item_gtp_psc) \ + X(SET, gtp_ext_hdr_qfi, v->hdr.qfi, rte_flow_item_gtp_psc) \ + X(SET, vxlan_flags, v->flags, rte_flow_item_vxlan) \ + X(SET, vxlan_udp_port, ETH_VXLAN_DEFAULT_PORT, rte_flow_item_vxlan) \ + X(SET, source_qp, v->queue, mlx5_rte_flow_item_sq) \ + X(SET, tag, v->data, rte_flow_item_tag) \ + X(SET, metadata, v->data, rte_flow_item_meta) \ + X(SET_BE16, gre_c_ver, v->c_rsvd0_ver, rte_flow_item_gre) \ + X(SET_BE16, gre_protocol_type, v->protocol, rte_flow_item_gre) \ + X(SET, ipv4_protocol_gre, IPPROTO_GRE, rte_flow_item_gre) \ + X(SET_BE32, gre_opt_key, v->key.key, rte_flow_item_gre_opt) \ + X(SET_BE32, gre_opt_seq, v->sequence.sequence, rte_flow_item_gre_opt) \ + X(SET_BE16, gre_opt_checksum, v->checksum_rsvd.checksum, rte_flow_item_gre_opt) \ + X(SET, meter_color, rte_col_2_mlx5_col(v->color), rte_flow_item_meter_color) + +/* Item set function format */ +#define X(set_type, func_name, value, item_type) \ +static void mlx5dr_definer_##func_name##_set( \ + struct mlx5dr_definer_fc *fc, \ + const void *item_spec, \ + uint8_t *tag) \ +{ \ + __rte_unused const struct item_type *v = item_spec; \ + DR_##set_type(tag, value, fc->byte_off, fc->bit_off, fc->bit_mask); \ +} +LIST_OF_FIELDS_INFO +#undef X + +static void +mlx5dr_definer_ones_set(struct mlx5dr_definer_fc *fc, + __rte_unused const void *item_spec, + __rte_unused uint8_t *tag) +{ + DR_SET(tag, -1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_eth_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_eth *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_vlan ? STE_CVLAN : STE_NO_VLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_first_vlan_q_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vlan *v = item_spec; + uint8_t vlan_type; + + vlan_type = v->has_more_vlan ? 
STE_SVLAN : STE_CVLAN; + + DR_SET(tag, vlan_type, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_mask(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *m = item_spec; + uint32_t reg_mask = 0; + + if (m->flags & (RTE_FLOW_CONNTRACK_PKT_STATE_VALID | + RTE_FLOW_CONNTRACK_PKT_STATE_INVALID | + RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED)) + reg_mask |= (MLX5_CT_SYNDROME_VALID | MLX5_CT_SYNDROME_INVALID | + MLX5_CT_SYNDROME_TRAP); + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_mask |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (m->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_mask |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_mask, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_conntrack_tag(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_conntrack *v = item_spec; + uint32_t reg_value = 0; + + /* The conflict should be checked in the validation. */ + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_VALID) + reg_value |= MLX5_CT_SYNDROME_VALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_CHANGED) + reg_value |= MLX5_CT_SYNDROME_STATE_CHANGE; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_INVALID) + reg_value |= MLX5_CT_SYNDROME_INVALID; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_DISABLED) + reg_value |= MLX5_CT_SYNDROME_TRAP; + + if (v->flags & RTE_FLOW_CONNTRACK_PKT_STATE_BAD) + reg_value |= MLX5_CT_SYNDROME_BAD_PACKET; + + DR_SET(tag, reg_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + bool inner = (fc->fname == MLX5DR_DEFINER_FNAME_INTEGRITY_I); + const struct rte_flow_item_integrity *v = item_spec; + uint32_t ok1_bits = 0; + + if (v->l3_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L3_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->ipv4_csum_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK); + + if (v->l4_ok) + ok1_bits |= inner ? BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_OK) | + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + if (v->l4_csum_ok) + ok1_bits |= inner ? 
BIT(MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK) : + BIT(MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK); + + DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const rte_be32_t *v = item_spec; + + DR_SET_BE32(tag, *v, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vxlan_vni_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_vxlan *v = item_spec; + + memcpy(tag + fc->byte_off, v->vni, sizeof(v->vni)); +} + +static void +mlx5dr_definer_ipv6_tos_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint8_t tos = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, tos); + + DR_SET(tag, tos, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->hdr.icmp_type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->hdr.icmp_code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->hdr.icmp_cksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp_dw2_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp *v = item_spec; + rte_be32_t icmp_dw2; + + icmp_dw2 = (rte_be_to_cpu_16(v->hdr.icmp_ident) << __mlx5_dw_bit_off(header_icmp, ident)) | + (rte_be_to_cpu_16(v->hdr.icmp_seq_nb) << __mlx5_dw_bit_off(header_icmp, seq_nb)); + + DR_SET(tag, icmp_dw2, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_icmp6_dw1_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_icmp6 *v = item_spec; + rte_be32_t icmp_dw1; + + icmp_dw1 = (v->type << __mlx5_dw_bit_off(header_icmp, type)) | + (v->code << __mlx5_dw_bit_off(header_icmp, code)) | + (rte_be_to_cpu_16(v->checksum) << __mlx5_dw_bit_off(header_icmp, cksum)); + + DR_SET(tag, icmp_dw1, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_ipv6_flow_label_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ipv6 *v = item_spec; + uint32_t flow_label = DR_GET(header_ipv6_vtc, &v->hdr.vtc_flow, flow_label); + + DR_SET(tag, flow_label, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static void +mlx5dr_definer_vport_set(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag) +{ + const struct rte_flow_item_ethdev *v = item_spec; + const struct flow_hw_port_info *port_info; + uint32_t regc_value; + + port_info = flow_hw_conv_port_id(v->port_id); + if (unlikely(!port_info)) + regc_value = BAD_PORT; + else + regc_value = port_info->regc_value >> fc->bit_off; + + /* Bit offset is set to 0 to since regc value is 32bit */ + DR_SET(tag, regc_value, fc->byte_off, fc->bit_off, fc->bit_mask); +} + +static int +mlx5dr_definer_conv_item_eth(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_eth *m = item->mask; + uint8_t empty_mac[RTE_ETHER_ADDR_LEN] = {0}; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->type) { + 
fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + /* Check SMAC 47_16 */ + if (memcmp(m->src.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_47_16_set; + DR_CALC_SET(fc, eth_l2_src, smac_47_16, inner); + } + + /* Check SMAC 15_0 */ + if (memcmp(m->src.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_SMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_smac_15_0_set; + DR_CALC_SET(fc, eth_l2_src, smac_15_0, inner); + } + + /* Check DMAC 47_16 */ + if (memcmp(m->dst.addr_bytes, empty_mac, 4)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_48_16, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_47_16_set; + DR_CALC_SET(fc, eth_l2, dmac_47_16, inner); + } + + /* Check DMAC 15_0 */ + if (memcmp(m->dst.addr_bytes + 4, empty_mac + 4, 2)) { + fc = &cd->fc[DR_CALC_FNAME(ETH_DMAC_15_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_dmac_15_0_set; + DR_CALC_SET(fc, eth_l2, dmac_15_0, inner); + } + + if (m->has_vlan) { + /* Mark packet as tagged (CVLAN) */ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_eth_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->reserved) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed || m->has_more_vlan) { + /* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/ + fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_first_vlan_q_set; + DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner); + } + + if (m->tci) { + fc = &cd->fc[DR_CALC_FNAME(VLAN_TCI, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tci_set; + DR_CALC_SET(fc, eth_l2, tci, inner); + } + + if (m->inner_type) { + fc = &cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_eth_type_set; + DR_CALC_SET(fc, eth_l2, l3_ethertype, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv4(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_ipv4_hdr *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->total_length || m->packet_id || + m->hdr_checksum) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->fragment_offset) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_frag_set; + DR_CALC_SET(fc, eth_l3, fragment_offset, inner); + } 
+ + if (m->next_proto_id) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_next_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->dst_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_DST, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_dst_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, destination_address, inner); + } + + if (m->src_addr) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_SRC, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_src_addr_set; + DR_CALC_SET(fc, ipv4_src_dest, source_address, inner); + } + + if (m->ihl) { + fc = &cd->fc[DR_CALC_FNAME(IPV4_IHL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_ihl_set; + DR_CALC_SET(fc, eth_l3, ihl, inner); + } + + if (m->time_to_live) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_time_to_live_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (m->type_of_service) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv4_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_ipv6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ipv6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_version_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l3_type, inner); + + /* Overwrite - Unset ethertype if present */ + memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc)); + } + + if (!m) + return 0; + + if (m->has_hop_ext || m->has_route_ext || m->has_auth_ext || + m->has_esp_ext || m->has_dest_ext || m->has_mobil_ext || + m->has_hip_ext || m->has_shim6_ext) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->has_frag_ext) { + fc = &cd->fc[DR_CALC_FNAME(IP_FRAG, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_frag_set; + DR_CALC_SET(fc, eth_l4, ip_fragmented, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, tos)) { + fc = &cd->fc[DR_CALC_FNAME(IP_TOS, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_tos_set; + DR_CALC_SET(fc, eth_l3, tos, inner); + } + + if (DR_GET(header_ipv6_vtc, &m->hdr.vtc_flow, flow_label)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_FLOW_LABEL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_flow_label_set; + DR_CALC_SET(fc, eth_l3, flow_label, inner); + } + + if (m->hdr.payload_len) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_PAYLOAD_LEN, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_payload_len_set; + DR_CALC_SET(fc, eth_l3, ipv6_payload_length, inner); + } + + if (m->hdr.proto) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_proto_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (m->hdr.hop_limits) { + fc = &cd->fc[DR_CALC_FNAME(IP_TTL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_hop_limits_set; + DR_CALC_SET(fc, eth_l3, time_to_live_hop_limit, inner); + } + + if (!is_mem_zero(m->hdr.src_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_ipv6_src_addr_127_96_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_95_64_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_63_32_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.src_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_SRC_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_src_addr_31_0_set; + DR_CALC_SET(fc, ipv6_src, ipv6_address_31_0, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_127_96, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_127_96_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_127_96, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 4, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_95_64, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_95_64_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_95_64, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 8, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_63_32, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_63_32_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_63_32, inner); + } + + if (!is_mem_zero(m->hdr.dst_addr + 12, 4)) { + fc = &cd->fc[DR_CALC_FNAME(IPV6_DST_31_0, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ipv6_dst_addr_31_0_set; + DR_CALC_SET(fc, ipv6_dst, ipv6_address_31_0, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_udp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_udp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Set match on L4 type UDP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.dgram_cksum || m->hdr.dgram_len) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_udp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_tcp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tcp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type TCP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + if (!m) + return 0; + + if (m->hdr.tcp_flags) { + fc = &cd->fc[DR_CALC_FNAME(TCP_FLAGS, inner)]; + fc->item_idx = 
item_idx; + fc->tag_set = &mlx5dr_definer_tcp_flags_set; + DR_CALC_SET(fc, eth_l4, tcp_flags, inner); + } + + if (m->hdr.src_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_SPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_src_port_set; + DR_CALC_SET(fc, eth_l4, source_port, inner); + } + + if (m->hdr.dst_port) { + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_tcp_dst_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTPU dest port if not present */ + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, false)]; + if (!fc->tag_set && !cd->relaxed) { + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_udp_port_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l4, destination_port, false); + } + + if (!m) + return 0; + + if (m->msg_len || m->v_pt_rsv_flags & ~MLX5DR_DEFINER_GTP_EXT_HDR_BIT) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->teid) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_TEID]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_teid_set; + fc->bit_mask = __mlx5_mask(header_gtp, teid); + fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE; + } + + if (m->v_pt_rsv_flags) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + + if (m->msg_type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_msg_type_set; + fc->bit_mask = __mlx5_mask(header_gtp, msg_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gtp_psc *m = item->mask; + struct mlx5dr_definer_fc *fc; + + /* Overwrite GTP extension flag to be 1 */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag); + fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag); + fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE; + } + + /* Overwrite next extension header type */ + if (!cd->relaxed) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_next_ext_hdr_set; + fc->tag_mask_set 
= &mlx5dr_definer_ones_set; + fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type); + fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type); + fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE; + } + + if (!m) + return 0; + + if (m->hdr.type) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + if (m->hdr.qfi) { + if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) { + rte_errno = ENOTSUP; + return rte_errno; + } + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set; + fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi); + fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi); + fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_ethdev *m = item->mask; + struct mlx5dr_definer_fc *fc; + uint8_t bit_offset = 0; + + if (m->port_id) { + if (!cd->caps->wire_regc_mask) { + DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask"); + rte_errno = ENOTSUP; + return rte_errno; + } + + while (!(cd->caps->wire_regc_mask & (1 << bit_offset))) + bit_offset++; + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vport_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, registers, register_c_0); + fc->bit_off = bit_offset; + fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset; + } else { + DR_LOG(ERR, "Pord ID item mask must specify ID mask"); + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_vxlan *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* In order to match on VXLAN we must match on ether_type, ip_protocol + * and l4_dport. 
+ */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_udp_protocol_set; + DR_CALC_SET(fc, eth_l2, l4_type_bwc, inner); + } + + fc = &cd->fc[DR_CALC_FNAME(L4_DPORT, inner)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_vxlan_udp_port_set; + DR_CALC_SET(fc, eth_l4, destination_port, inner); + } + } + + if (!m) + return 0; + + if (m->flags) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN flags item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_FLAGS]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_flags_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_vxlan, flags); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, flags); + } + + if (!is_mem_zero(m->vni, 3)) { + if (inner) { + DR_LOG(ERR, "Inner VXLAN vni item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + fc = &cd->fc[MLX5DR_DEFINER_FNAME_VXLAN_VNI]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_vxlan_vni_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + fc->bit_mask = __mlx5_mask(header_vxlan, vni); + fc->bit_off = __mlx5_dw_bit_off(header_vxlan, vni); + } + + return 0; +} + +static struct mlx5dr_definer_fc * +mlx5dr_definer_get_register_fc(struct mlx5dr_definer_conv_data *cd, int reg) +{ + struct mlx5dr_definer_fc *fc; + + switch (reg) { + case REG_C_0: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_0]; + DR_CALC_SET_HDR(fc, registers, register_c_0); + break; + case REG_C_1: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_1]; + DR_CALC_SET_HDR(fc, registers, register_c_1); + break; + case REG_C_2: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_2]; + DR_CALC_SET_HDR(fc, registers, register_c_2); + break; + case REG_C_3: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_3]; + DR_CALC_SET_HDR(fc, registers, register_c_3); + break; + case REG_C_4: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_4]; + DR_CALC_SET_HDR(fc, registers, register_c_4); + break; + case REG_C_5: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_5]; + DR_CALC_SET_HDR(fc, registers, register_c_5); + break; + case REG_C_6: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_6]; + DR_CALC_SET_HDR(fc, registers, register_c_6); + break; + case REG_C_7: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_7]; + DR_CALC_SET_HDR(fc, registers, register_c_7); + break; + case REG_A: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_A]; + DR_CALC_SET_HDR(fc, metadata, general_purpose); + break; + case REG_B: + fc = &cd->fc[MLX5DR_DEFINER_FNAME_REG_B]; + DR_CALC_SET_HDR(fc, metadata, metadata_to_cqe); + break; + default: + rte_errno = ENOTSUP; + return NULL; + } + + return fc; +} + +static int +mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_tag *m = item->mask; + const struct rte_flow_item_tag *v = item->spec; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m || !v) + return 0; + + if (item->type == RTE_FLOW_ITEM_TYPE_TAG) + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index); + else + reg = (int)v->index; + + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item tag"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = 
&mlx5dr_definer_tag_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meta *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item metadata"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_metadata_set; + return 0; +} + +static int +mlx5dr_definer_conv_item_sq(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct mlx5_rte_flow_item_sq *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!m) + return 0; + + if (m->queue) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_SOURCE_QP]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_source_qp_set; + DR_CALC_SET_HDR(fc, source_qp_gvmi, source_qp); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (inner) { + DR_LOG(ERR, "Inner GRE item not supported"); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, inner); + } + + if (!m) + return 0; + + if (m->c_rsvd0_ver) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_C_VER]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_c_ver_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, c_rsvd0_ver); + fc->bit_off = __mlx5_dw_bit_off(header_gre, c_rsvd0_ver); + } + + if (m->protocol) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_PROTOCOL]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_protocol_type_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->byte_off += MLX5_BYTE_OFF(header_gre, gre_protocol); + fc->bit_mask = __mlx5_mask(header_gre, gre_protocol); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_protocol); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_opt(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_gre_opt *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (m->checksum_rsvd.checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_checksum_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_1); + } + + if (m->key.key) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + if (m->sequence.sequence) { + fc = 
&cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_opt_seq_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_3); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_gre_key(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const rte_be32_t *m = item->mask; + struct mlx5dr_definer_fc *fc; + + if (!cd->relaxed) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_ones_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_0); + fc->bit_mask = __mlx5_mask(header_gre, gre_k_present); + fc->bit_off = __mlx5_dw_bit_off(header_gre, gre_k_present); + + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, false)]; + if (!fc->tag_set) { + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + fc->tag_set = &mlx5dr_definer_ipv4_protocol_gre_set; + DR_CALC_SET(fc, eth_l3, protocol_next_header, false); + } + } + + if (!m) + return 0; + + if (*m) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_GRE_OPT_KEY]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_gre_key_set; + DR_CALC_SET_HDR(fc, tunnel_header, tunnel_header_2); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_integrity(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_integrity *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + if (!m) + return 0; + + if (m->packet_ok || m->l2_ok || m->l2_crc_ok || m->l3_len_ok) { + rte_errno = ENOTSUP; + return rte_errno; + } + + if (m->l3_ok || m->ipv4_csum_ok || m->l4_ok || m->l4_csum_ok) { + fc = &cd->fc[DR_CALC_FNAME(INTEGRITY, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_integrity_set; + DR_CALC_SET_HDR(fc, oks1, oks1_bits); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_conntrack *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1); + if (reg <= 0) { + DR_LOG(ERR, "Invalid register for item conntrack"); + rte_errno = EINVAL; + return rte_errno; + } + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_mask_set = &mlx5dr_definer_conntrack_mask; + fc->tag_set = &mlx5dr_definer_conntrack_tag; + + return 0; +} + +static int +mlx5dr_definer_conv_item_icmp(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->hdr.icmp_type || m->hdr.icmp_code || m->hdr.icmp_cksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + if (m->hdr.icmp_ident || m->hdr.icmp_seq_nb) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW2]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_dw2_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw2); + } + + return 0; +} 
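/*
 * [Editor's illustrative sketch - not part of the submitted patch]
 * The many mlx5dr_definer_*_set callbacks referenced by the converters
 * above are assumed to follow one common pattern: read the relevant
 * field from the item spec (or mask) and write it into the match tag at
 * the pre-computed field-copy offsets. A hypothetical setter for the
 * UDP source port would look roughly like this:
 *
 *	static void
 *	example_udp_src_port_set(struct mlx5dr_definer_fc *fc,
 *				 const void *item_spec, uint8_t *tag)
 *	{
 *		const struct rte_flow_item_udp *v = item_spec;
 *
 *		DR_SET(tag, rte_be_to_cpu_16(v->hdr.src_port),
 *		       fc->byte_off, fc->bit_off, fc->bit_mask);
 *	}
 *
 * The actual setters are assumed to be defined earlier in this file and
 * may differ in detail (endianness handling, derived values, etc.).
 */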
+ +static int +mlx5dr_definer_conv_item_icmp6(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_icmp6 *m = item->mask; + struct mlx5dr_definer_fc *fc; + bool inner = cd->tunnel; + + /* Overwrite match on L4 type ICMP6 */ + if (!cd->relaxed) { + fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp_protocol_set; + fc->tag_mask_set = &mlx5dr_definer_ones_set; + DR_CALC_SET(fc, eth_l2, l4_type, inner); + } + + if (!m) + return 0; + + if (m->type || m->code || m->checksum) { + fc = &cd->fc[MLX5DR_DEFINER_FNAME_ICMP_DW1]; + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_icmp6_dw1_set; + DR_CALC_SET_HDR(fc, tcp_icmp, icmp_dw1); + } + + return 0; +} + +static int +mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd, + struct rte_flow_item *item, + int item_idx) +{ + const struct rte_flow_item_meter_color *m = item->mask; + struct mlx5dr_definer_fc *fc; + int reg; + + if (!m) + return 0; + + reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0); + MLX5_ASSERT(reg > 0); + + fc = mlx5dr_definer_get_register_fc(cd, reg); + if (!fc) + return rte_errno; + + fc->item_idx = item_idx; + fc->tag_set = &mlx5dr_definer_meter_color_set; + return 0; +} + +static int +mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_fc fc[MLX5DR_DEFINER_FNAME_MAX] = {{0}}; + struct mlx5dr_definer_conv_data cd = {0}; + struct rte_flow_item *items = mt->items; + uint64_t item_flags = 0; + uint32_t total = 0; + int i, j; + int ret; + + cd.fc = fc; + cd.hl = hl; + cd.caps = ctx->caps; + cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH; + + /* Collect all RTE fields to the field array and set header layout */ + for (i = 0; items->type != RTE_FLOW_ITEM_TYPE_END; i++, items++) { + cd.tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL); + + switch ((int)items->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = mlx5dr_definer_conv_item_eth(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L2 : + MLX5_FLOW_LAYER_OUTER_L2; + break; + case RTE_FLOW_ITEM_TYPE_VLAN: + ret = mlx5dr_definer_conv_item_vlan(&cd, items, i); + item_flags |= cd.tunnel ? + (MLX5_FLOW_LAYER_INNER_VLAN | MLX5_FLOW_LAYER_INNER_L2) : + (MLX5_FLOW_LAYER_OUTER_VLAN | MLX5_FLOW_LAYER_OUTER_L2); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = mlx5dr_definer_conv_item_ipv4(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 : + MLX5_FLOW_LAYER_OUTER_L3_IPV4; + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = mlx5dr_definer_conv_item_ipv6(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 : + MLX5_FLOW_LAYER_OUTER_L3_IPV6; + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = mlx5dr_definer_conv_item_udp(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP : + MLX5_FLOW_LAYER_OUTER_L4_UDP; + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = mlx5dr_definer_conv_item_tcp(&cd, items, i); + item_flags |= cd.tunnel ? 
MLX5_FLOW_LAYER_INNER_L4_TCP : + MLX5_FLOW_LAYER_OUTER_L4_TCP; + break; + case RTE_FLOW_ITEM_TYPE_GTP: + ret = mlx5dr_definer_conv_item_gtp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP; + break; + case RTE_FLOW_ITEM_TYPE_GTP_PSC: + ret = mlx5dr_definer_conv_item_gtp_psc(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GTP_PSC; + break; + case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT: + ret = mlx5dr_definer_conv_item_port(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_REPRESENTED_PORT; + mt->vport_item_id = i; + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = mlx5dr_definer_conv_item_vxlan(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_VXLAN; + break; + case MLX5_RTE_FLOW_ITEM_TYPE_SQ: + ret = mlx5dr_definer_conv_item_sq(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_SQ; + break; + case RTE_FLOW_ITEM_TYPE_TAG: + case MLX5_RTE_FLOW_ITEM_TYPE_TAG: + ret = mlx5dr_definer_conv_item_tag(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_TAG; + break; + case RTE_FLOW_ITEM_TYPE_META: + ret = mlx5dr_definer_conv_item_metadata(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METADATA; + break; + case RTE_FLOW_ITEM_TYPE_GRE: + ret = mlx5dr_definer_conv_item_gre(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_OPTION: + ret = mlx5dr_definer_conv_item_gre_opt(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE; + break; + case RTE_FLOW_ITEM_TYPE_GRE_KEY: + ret = mlx5dr_definer_conv_item_gre_key(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_GRE_KEY; + break; + case RTE_FLOW_ITEM_TYPE_INTEGRITY: + ret = mlx5dr_definer_conv_item_integrity(&cd, items, i); + item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_INTEGRITY : + MLX5_FLOW_ITEM_OUTER_INTEGRITY; + break; + case RTE_FLOW_ITEM_TYPE_CONNTRACK: + ret = mlx5dr_definer_conv_item_conntrack(&cd, items, i); + break; + case RTE_FLOW_ITEM_TYPE_ICMP: + ret = mlx5dr_definer_conv_item_icmp(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP; + break; + case RTE_FLOW_ITEM_TYPE_ICMP6: + ret = mlx5dr_definer_conv_item_icmp6(&cd, items, i); + item_flags |= MLX5_FLOW_LAYER_ICMP6; + break; + case RTE_FLOW_ITEM_TYPE_METER_COLOR: + ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i); + item_flags |= MLX5_FLOW_ITEM_METER_COLOR; + break; + default: + DR_LOG(ERR, "Unsupported item type %d", items->type); + rte_errno = ENOTSUP; + return rte_errno; + } + + if (ret) { + DR_LOG(ERR, "Failed processing item type: %d", items->type); + return ret; + } + } + + mt->item_flags = item_flags; + + /* Fill in headers layout and calculate total number of fields */ + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + total++; + DR_SET(hl, -1, fc[i].byte_off, fc[i].bit_off, fc[i].bit_mask); + } + } + + mt->fc_sz = total; + mt->fc = simple_calloc(total, sizeof(*mt->fc)); + if (!mt->fc) { + DR_LOG(ERR, "Failed to allocate field copy array"); + rte_errno = ENOMEM; + return rte_errno; + } + + j = 0; + for (i = 0; i < MLX5DR_DEFINER_FNAME_MAX; i++) { + if (fc[i].tag_set) { + memcpy(&mt->fc[j], &fc[i], sizeof(*mt->fc)); + mt->fc[j].fname = i; + j++; + } + } + + return 0; +} + +static int +mlx5dr_definer_find_byte_in_tag(struct mlx5dr_definer *definer, + uint32_t hl_byte_off, + uint32_t *tag_byte_off) +{ + uint8_t byte_offset; + int i; + + /* Add offset since each DW covers multiple BYTEs */ + byte_offset = hl_byte_off % DW_SIZE; + for (i = 0; i < DW_SELECTORS; i++) { + if (definer->dw_selector[i] == hl_byte_off / DW_SIZE) { + *tag_byte_off = byte_offset + DW_SIZE * (DW_SELECTORS - i - 1); + return 0; + } + } + + /* Add offset to 
skip DWs in definer */ + byte_offset = DW_SIZE * DW_SELECTORS; + /* Iterate in reverse since the code uses bytes from 7 -> 0 */ + for (i = BYTE_SELECTORS; i-- > 0 ;) { + if (definer->byte_selector[i] == hl_byte_off) { + *tag_byte_off = byte_offset + (BYTE_SELECTORS - i - 1); + return 0; + } + } + + /* The hl byte offset must be part of the definer */ + DR_LOG(INFO, "Failed to map to definer, HL byte [%d] not found", byte_offset); + rte_errno = EINVAL; + return rte_errno; +} + +static int +mlx5dr_definer_fc_bind(struct mlx5dr_definer *definer, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz) +{ + uint32_t tag_offset = 0; + int ret, byte_diff; + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + /* Map header layout byte offset to byte offset in tag */ + ret = mlx5dr_definer_find_byte_in_tag(definer, fc->byte_off, &tag_offset); + if (ret) + return ret; + + /* Move setter based on the location in the definer */ + byte_diff = fc->byte_off % DW_SIZE - tag_offset % DW_SIZE; + fc->bit_off = fc->bit_off + byte_diff * BITS_IN_BYTE; + + /* Update offset in headers layout to offset in tag */ + fc->byte_off = tag_offset; + fc++; + } + + return 0; +} + +static bool +mlx5dr_definer_best_hl_fit_recu(struct mlx5dr_definer_sel_ctrl *ctrl, + uint32_t cur_dw, + uint32_t *data) +{ + uint8_t bytes_set; + int byte_idx; + bool ret; + int i; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + + /* No data set, can skip to next DW */ + while (!*data) { + cur_dw++; + data++; + + /* Reached end, nothing left to do */ + if (cur_dw == MLX5_ST_SZ_DW(definer_hl)) + return true; + } + + /* Used all DW selectors and Byte selectors, no possible solution */ + if (ctrl->allowed_full_dw == ctrl->used_full_dw && + ctrl->allowed_lim_dw == ctrl->used_lim_dw && + ctrl->allowed_bytes == ctrl->used_bytes) + return false; + + /* Try to use limited DW selectors */ + if (ctrl->allowed_lim_dw > ctrl->used_lim_dw && cur_dw < 64) { + ctrl->lim_dw_selector[ctrl->used_lim_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->lim_dw_selector[--ctrl->used_lim_dw] = 0; + } + + /* Try to use DW selectors */ + if (ctrl->allowed_full_dw > ctrl->used_full_dw) { + ctrl->full_dw_selector[ctrl->used_full_dw++] = cur_dw; + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + ctrl->full_dw_selector[--ctrl->used_full_dw] = 0; + } + + /* No byte selector for offset bigger than 255 */ + if (cur_dw * DW_SIZE > 255) + return false; + + bytes_set = !!(0x000000ff & *data) + + !!(0x0000ff00 & *data) + + !!(0x00ff0000 & *data) + + !!(0xff000000 & *data); + + /* Check if there are enough byte selectors left */ + if (bytes_set + ctrl->used_bytes > ctrl->allowed_bytes) + return false; + + /* Try to use Byte selectors */ + for (i = 0; i < DW_SIZE; i++) + if ((0xff000000 >> (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + /* Use byte selectors high to low */ + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = cur_dw * DW_SIZE + i; + ctrl->used_bytes++; + } + + ret = mlx5dr_definer_best_hl_fit_recu(ctrl, cur_dw + 1, data + 1); + if (ret) + return ret; + + for (i = 0; i < DW_SIZE; i++) + if ((0xff << (i * BITS_IN_BYTE)) & rte_be_to_cpu_32(*data)) { + ctrl->used_bytes--; + byte_idx = ctrl->allowed_bytes - ctrl->used_bytes - 1; + ctrl->byte_selector[byte_idx] = 0; + } + + return false; +} + +static void +mlx5dr_definer_apply_sel_ctrl(struct mlx5dr_definer_sel_ctrl *ctrl, 
+ struct mlx5dr_definer *definer) +{ + memcpy(definer->byte_selector, ctrl->byte_selector, ctrl->allowed_bytes); + memcpy(definer->dw_selector, ctrl->full_dw_selector, ctrl->allowed_full_dw); + memcpy(definer->dw_selector + ctrl->allowed_full_dw, + ctrl->lim_dw_selector, ctrl->allowed_lim_dw); +} + +static int +mlx5dr_definer_find_best_hl_fit(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt, + uint8_t *hl) +{ + struct mlx5dr_definer_sel_ctrl ctrl = {0}; + bool found; + + /* Try to create a match definer */ + ctrl.allowed_full_dw = DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = 0; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_MATCH; + return 0; + } + + /* Try to create a full/limited jumbo definer */ + ctrl.allowed_full_dw = ctx->caps->full_dw_jumbo_support ? DW_SELECTORS : + DW_SELECTORS_MATCH; + ctrl.allowed_lim_dw = ctx->caps->full_dw_jumbo_support ? 0 : + DW_SELECTORS_LIMITED; + ctrl.allowed_bytes = BYTE_SELECTORS; + + found = mlx5dr_definer_best_hl_fit_recu(&ctrl, 0, (uint32_t *)hl); + if (found) { + mlx5dr_definer_apply_sel_ctrl(&ctrl, mt->definer); + mt->definer->type = MLX5DR_DEFINER_TYPE_JUMBO; + return 0; + } + + DR_LOG(ERR, "Unable to find supporting match/jumbo definer combination"); + rte_errno = ENOTSUP; + return rte_errno; +} + +static void +mlx5dr_definer_create_tag_mask(struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + if (fc->tag_mask_set) + fc->tag_mask_set(fc, items[fc->item_idx].mask, tag); + else + fc->tag_set(fc, items[fc->item_idx].mask, tag); + fc++; + } +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag) +{ + uint32_t i; + + for (i = 0; i < fc_sz; i++) { + fc->tag_set(fc, items[fc->item_idx].spec, tag); + fc++; + } +} + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer) +{ + return definer->obj->id; +} + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b) +{ + int i; + + if (definer_a->type != definer_b->type) + return 1; + + for (i = 0; i < BYTE_SELECTORS; i++) + if (definer_a->byte_selector[i] != definer_b->byte_selector[i]) + return 1; + + for (i = 0; i < DW_SELECTORS; i++) + if (definer_a->dw_selector[i] != definer_b->dw_selector[i]) + return 1; + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) + if (definer_a->mask.jumbo[i] != definer_b->mask.jumbo[i]) + return 1; + + return 0; +} + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_cmd_definer_create_attr def_attr = {0}; + struct ibv_context *ibv_ctx = ctx->ibv_ctx; + uint8_t *hl; + int ret; + + if (mt->refcount++) + return 0; + + mt->definer = simple_calloc(1, sizeof(*mt->definer)); + if (!mt->definer) { + DR_LOG(ERR, "Failed to allocate memory for definer"); + rte_errno = ENOMEM; + goto dec_refcount; + } + + /* Header layout (hl) holds full bit mask per field */ + hl = simple_calloc(1, MLX5_ST_SZ_BYTES(definer_hl)); + if (!hl) { + DR_LOG(ERR, "Failed to allocate memory for header layout"); + rte_errno = ENOMEM; + goto free_definer; + } + + /* Convert items to hl and allocate the field copy array (fc) */ + ret = mlx5dr_definer_conv_items_to_hl(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to convert items to hl"); + goto free_hl; + } + + 
/* Find the definer for given header layout */ + ret = mlx5dr_definer_find_best_hl_fit(ctx, mt, hl); + if (ret) { + DR_LOG(ERR, "Failed to create definer from header layout"); + goto free_field_copy; + } + + /* Align field copy array based on the new definer */ + ret = mlx5dr_definer_fc_bind(mt->definer, + mt->fc, + mt->fc_sz); + if (ret) { + DR_LOG(ERR, "Failed to bind field copy to definer"); + goto free_field_copy; + } + + /* Create the tag mask used for definer creation */ + mlx5dr_definer_create_tag_mask(mt->items, + mt->fc, + mt->fc_sz, + mt->definer->mask.jumbo); + + /* Create definer based on the bitmask tag */ + def_attr.match_mask = mt->definer->mask.jumbo; + def_attr.dw_selector = mt->definer->dw_selector; + def_attr.byte_selector = mt->definer->byte_selector; + mt->definer->obj = mlx5dr_cmd_definer_create(ibv_ctx, &def_attr); + if (!mt->definer->obj) + goto free_field_copy; + + simple_free(hl); + + return 0; + +free_field_copy: + simple_free(mt->fc); +free_hl: + simple_free(hl); +free_definer: + simple_free(mt->definer); +dec_refcount: + mt->refcount--; + + return rte_errno; +} + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt) +{ + if (--mt->refcount) + return; + + simple_free(mt->fc); + mlx5dr_cmd_destroy_obj(mt->definer->obj); + simple_free(mt->definer); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h new file mode 100644 index 0000000000..d52c6b0627 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_definer.h @@ -0,0 +1,585 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEFINER_H_ +#define MLX5DR_DEFINER_H_ + +/* Selectors based on match TAG */ +#define DW_SELECTORS_MATCH 6 +#define DW_SELECTORS_LIMITED 3 +#define DW_SELECTORS 9 +#define BYTE_SELECTORS 8 + +enum mlx5dr_definer_fname { + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_SMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_48_16_I, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_O, + MLX5DR_DEFINER_FNAME_ETH_DMAC_15_0_I, + MLX5DR_DEFINER_FNAME_ETH_TYPE_O, + MLX5DR_DEFINER_FNAME_ETH_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_O, + MLX5DR_DEFINER_FNAME_VLAN_TYPE_I, + MLX5DR_DEFINER_FNAME_VLAN_TCI_O, + MLX5DR_DEFINER_FNAME_VLAN_TCI_I, + MLX5DR_DEFINER_FNAME_IPV4_IHL_O, + MLX5DR_DEFINER_FNAME_IPV4_IHL_I, + MLX5DR_DEFINER_FNAME_IP_TTL_O, + MLX5DR_DEFINER_FNAME_IP_TTL_I, + MLX5DR_DEFINER_FNAME_IPV4_DST_O, + MLX5DR_DEFINER_FNAME_IPV4_DST_I, + MLX5DR_DEFINER_FNAME_IPV4_SRC_O, + MLX5DR_DEFINER_FNAME_IPV4_SRC_I, + MLX5DR_DEFINER_FNAME_IP_VERSION_O, + MLX5DR_DEFINER_FNAME_IP_VERSION_I, + MLX5DR_DEFINER_FNAME_IP_FRAG_O, + MLX5DR_DEFINER_FNAME_IP_FRAG_I, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_O, + MLX5DR_DEFINER_FNAME_IPV6_PAYLOAD_LEN_I, + MLX5DR_DEFINER_FNAME_IP_TOS_O, + MLX5DR_DEFINER_FNAME_IP_TOS_I, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_O, + MLX5DR_DEFINER_FNAME_IPV6_FLOW_LABEL_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_O, + MLX5DR_DEFINER_FNAME_IPV6_DST_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_DST_31_0_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_O, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_O, + 
MLX5DR_DEFINER_FNAME_IPV6_SRC_127_96_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_95_64_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_63_32_I, + MLX5DR_DEFINER_FNAME_IPV6_SRC_31_0_I, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_O, + MLX5DR_DEFINER_FNAME_IP_PROTOCOL_I, + MLX5DR_DEFINER_FNAME_L4_SPORT_O, + MLX5DR_DEFINER_FNAME_L4_SPORT_I, + MLX5DR_DEFINER_FNAME_L4_DPORT_O, + MLX5DR_DEFINER_FNAME_L4_DPORT_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_I, + MLX5DR_DEFINER_FNAME_TCP_FLAGS_O, + MLX5DR_DEFINER_FNAME_GTP_TEID, + MLX5DR_DEFINER_FNAME_GTP_MSG_TYPE, + MLX5DR_DEFINER_FNAME_GTP_EXT_FLAG, + MLX5DR_DEFINER_FNAME_GTP_NEXT_EXT_HDR, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_PDU, + MLX5DR_DEFINER_FNAME_GTP_EXT_HDR_QFI, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_0, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_1, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_2, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_3, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_4, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_5, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_6, + MLX5DR_DEFINER_FNAME_FLEX_PARSER_7, + MLX5DR_DEFINER_FNAME_VPORT_REG_C_0, + MLX5DR_DEFINER_FNAME_VXLAN_FLAGS, + MLX5DR_DEFINER_FNAME_VXLAN_VNI, + MLX5DR_DEFINER_FNAME_SOURCE_QP, + MLX5DR_DEFINER_FNAME_REG_0, + MLX5DR_DEFINER_FNAME_REG_1, + MLX5DR_DEFINER_FNAME_REG_2, + MLX5DR_DEFINER_FNAME_REG_3, + MLX5DR_DEFINER_FNAME_REG_4, + MLX5DR_DEFINER_FNAME_REG_5, + MLX5DR_DEFINER_FNAME_REG_6, + MLX5DR_DEFINER_FNAME_REG_7, + MLX5DR_DEFINER_FNAME_REG_A, + MLX5DR_DEFINER_FNAME_REG_B, + MLX5DR_DEFINER_FNAME_GRE_KEY_PRESENT, + MLX5DR_DEFINER_FNAME_GRE_C_VER, + MLX5DR_DEFINER_FNAME_GRE_PROTOCOL, + MLX5DR_DEFINER_FNAME_GRE_OPT_KEY, + MLX5DR_DEFINER_FNAME_GRE_OPT_SEQ, + MLX5DR_DEFINER_FNAME_GRE_OPT_CHECKSUM, + MLX5DR_DEFINER_FNAME_INTEGRITY_O, + MLX5DR_DEFINER_FNAME_INTEGRITY_I, + MLX5DR_DEFINER_FNAME_ICMP_DW1, + MLX5DR_DEFINER_FNAME_ICMP_DW2, + MLX5DR_DEFINER_FNAME_MAX, +}; + +enum mlx5dr_definer_type { + MLX5DR_DEFINER_TYPE_MATCH, + MLX5DR_DEFINER_TYPE_JUMBO, +}; + +struct mlx5dr_definer_fc { + uint8_t item_idx; + uint32_t byte_off; + int bit_off; + uint32_t bit_mask; + enum mlx5dr_definer_fname fname; + void (*tag_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); + void (*tag_mask_set)(struct mlx5dr_definer_fc *fc, + const void *item_spec, + uint8_t *tag); +}; + +struct mlx5_ifc_definer_hl_eth_l2_bits { + u8 dmac_47_16[0x20]; + u8 dmac_15_0[0x10]; + u8 l3_ethertype[0x10]; + u8 reserved_at_40[0x1]; + u8 sx_sniffer[0x1]; + u8 functional_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 qp_type[0x2]; + u8 encap_type[0x2]; + u8 port_number[0x2]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 tci[0x10]; /* contains first_priority[0x3] + first_cfi[0x1] + first_vlan_id[0xc] */ + u8 l4_type[0x4]; + u8 reserved_at_64[0x2]; + u8 ipsec_layer[0x2]; + u8 l2_type[0x2]; + u8 force_lb[0x1]; + u8 l2_ok[0x1]; + u8 l3_ok[0x1]; + u8 l4_ok[0x1]; + u8 second_vlan_qualifier[0x2]; + u8 second_priority[0x3]; + u8 second_cfi[0x1]; + u8 second_vlan_id[0xc]; +}; + +struct mlx5_ifc_definer_hl_eth_l2_src_bits { + u8 smac_47_16[0x20]; + u8 smac_15_0[0x10]; + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 ip_fragmented[0x1]; + u8 functional_lb[0x1]; +}; + +struct mlx5_ifc_definer_hl_ib_l2_bits { + u8 sx_sniffer[0x1]; + u8 force_lb[0x1]; + u8 functional_lb[0x1]; + u8 reserved_at_3[0x3]; + u8 port_number[0x2]; + u8 sl[0x4]; + u8 qp_type[0x2]; + u8 lnh[0x2]; + u8 dlid[0x10]; + u8 vl[0x4]; + u8 lrh_packet_length[0xc]; + u8 slid[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l3_bits { + u8 ip_version[0x4]; + 
u8 ihl[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 time_to_live_hop_limit[0x8]; + u8 protocol_next_header[0x8]; + u8 identification[0x10]; + u8 flags[0x3]; + u8 fragment_offset[0xd]; + u8 ipv4_total_length[0x10]; + u8 checksum[0x10]; + u8 reserved_at_60[0xc]; + u8 flow_label[0x14]; + u8 packet_length[0x10]; + u8 ipv6_payload_length[0x10]; +}; + +struct mlx5_ifc_definer_hl_eth_l4_bits { + u8 source_port[0x10]; + u8 destination_port[0x10]; + u8 data_offset[0x4]; + u8 l4_ok[0x1]; + u8 l3_ok[0x1]; + u8 ip_fragmented[0x1]; + u8 tcp_ns[0x1]; + union { + u8 tcp_flags[0x8]; + struct { + u8 tcp_cwr[0x1]; + u8 tcp_ece[0x1]; + u8 tcp_urg[0x1]; + u8 tcp_ack[0x1]; + u8 tcp_psh[0x1]; + u8 tcp_rst[0x1]; + u8 tcp_syn[0x1]; + u8 tcp_fin[0x1]; + }; + }; + u8 first_fragment[0x1]; + u8 reserved_at_31[0xf]; +}; + +struct mlx5_ifc_definer_hl_src_qp_gvmi_bits { + u8 loopback_syndrome[0x8]; + u8 l3_type[0x2]; + u8 l4_type_bwc[0x2]; + u8 first_vlan_qualifier[0x2]; + u8 reserved_at_e[0x1]; + u8 functional_lb[0x1]; + u8 source_gvmi[0x10]; + u8 force_lb[0x1]; + u8 ip_fragmented[0x1]; + u8 source_is_requestor[0x1]; + u8 reserved_at_23[0x5]; + u8 source_qp[0x18]; +}; + +struct mlx5_ifc_definer_hl_ib_l4_bits { + u8 opcode[0x8]; + u8 qp[0x18]; + u8 se[0x1]; + u8 migreq[0x1]; + u8 ackreq[0x1]; + u8 fecn[0x1]; + u8 becn[0x1]; + u8 bth[0x1]; + u8 deth[0x1]; + u8 dcceth[0x1]; + u8 reserved_at_28[0x2]; + u8 pad_count[0x2]; + u8 tver[0x4]; + u8 p_key[0x10]; + u8 reserved_at_40[0x8]; + u8 deth_source_qp[0x18]; +}; + +enum mlx5dr_integrity_ok1_bits { + MLX5DR_DEFINER_OKS1_FIRST_L4_OK = 24, + MLX5DR_DEFINER_OKS1_FIRST_L3_OK = 25, + MLX5DR_DEFINER_OKS1_SECOND_L4_OK = 26, + MLX5DR_DEFINER_OKS1_SECOND_L3_OK = 27, + MLX5DR_DEFINER_OKS1_FIRST_L4_CSUM_OK = 28, + MLX5DR_DEFINER_OKS1_FIRST_IPV4_CSUM_OK = 29, + MLX5DR_DEFINER_OKS1_SECOND_L4_CSUM_OK = 30, + MLX5DR_DEFINER_OKS1_SECOND_IPV4_CSUM_OK = 31, +}; + +struct mlx5_ifc_definer_hl_oks1_bits { + union { + u8 oks1_bits[0x20]; + struct { + u8 second_ipv4_checksum_ok[0x1]; + u8 second_l4_checksum_ok[0x1]; + u8 first_ipv4_checksum_ok[0x1]; + u8 first_l4_checksum_ok[0x1]; + u8 second_l3_ok[0x1]; + u8 second_l4_ok[0x1]; + u8 first_l3_ok[0x1]; + u8 first_l4_ok[0x1]; + u8 flex_parser7_steering_ok[0x1]; + u8 flex_parser6_steering_ok[0x1]; + u8 flex_parser5_steering_ok[0x1]; + u8 flex_parser4_steering_ok[0x1]; + u8 flex_parser3_steering_ok[0x1]; + u8 flex_parser2_steering_ok[0x1]; + u8 flex_parser1_steering_ok[0x1]; + u8 flex_parser0_steering_ok[0x1]; + u8 second_ipv6_extension_header_vld[0x1]; + u8 first_ipv6_extension_header_vld[0x1]; + u8 l3_tunneling_ok[0x1]; + u8 l2_tunneling_ok[0x1]; + u8 second_tcp_ok[0x1]; + u8 second_udp_ok[0x1]; + u8 second_ipv4_ok[0x1]; + u8 second_ipv6_ok[0x1]; + u8 second_l2_ok[0x1]; + u8 vxlan_ok[0x1]; + u8 gre_ok[0x1]; + u8 first_tcp_ok[0x1]; + u8 first_udp_ok[0x1]; + u8 first_ipv4_ok[0x1]; + u8 first_ipv6_ok[0x1]; + u8 first_l2_ok[0x1]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_oks2_bits { + u8 reserved_at_0[0xa]; + u8 second_mpls_ok[0x1]; + u8 second_mpls4_s_bit[0x1]; + u8 second_mpls4_qualifier[0x1]; + u8 second_mpls3_s_bit[0x1]; + u8 second_mpls3_qualifier[0x1]; + u8 second_mpls2_s_bit[0x1]; + u8 second_mpls2_qualifier[0x1]; + u8 second_mpls1_s_bit[0x1]; + u8 second_mpls1_qualifier[0x1]; + u8 second_mpls0_s_bit[0x1]; + u8 second_mpls0_qualifier[0x1]; + u8 first_mpls_ok[0x1]; + u8 first_mpls4_s_bit[0x1]; + u8 first_mpls4_qualifier[0x1]; + u8 first_mpls3_s_bit[0x1]; + u8 first_mpls3_qualifier[0x1]; + u8 
first_mpls2_s_bit[0x1]; + u8 first_mpls2_qualifier[0x1]; + u8 first_mpls1_s_bit[0x1]; + u8 first_mpls1_qualifier[0x1]; + u8 first_mpls0_s_bit[0x1]; + u8 first_mpls0_qualifier[0x1]; +}; + +struct mlx5_ifc_definer_hl_voq_bits { + u8 reserved_at_0[0x18]; + u8 ecn_ok[0x1]; + u8 congestion[0x1]; + u8 profile[0x2]; + u8 internal_prio[0x4]; +}; + +struct mlx5_ifc_definer_hl_ipv4_src_dst_bits { + u8 source_address[0x20]; + u8 destination_address[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipv6_addr_bits { + u8 ipv6_address_127_96[0x20]; + u8 ipv6_address_95_64[0x20]; + u8 ipv6_address_63_32[0x20]; + u8 ipv6_address_31_0[0x20]; +}; + +struct mlx5_ifc_definer_tcp_icmp_header_bits { + union { + struct { + u8 icmp_dw1[0x20]; + u8 icmp_dw2[0x20]; + u8 icmp_dw3[0x20]; + }; + struct { + u8 tcp_seq[0x20]; + u8 tcp_ack[0x20]; + u8 tcp_win_urg[0x20]; + }; + }; +}; + +struct mlx5_ifc_definer_hl_tunnel_header_bits { + u8 tunnel_header_0[0x20]; + u8 tunnel_header_1[0x20]; + u8 tunnel_header_2[0x20]; + u8 tunnel_header_3[0x20]; +}; + +struct mlx5_ifc_definer_hl_ipsec_bits { + u8 spi[0x20]; + u8 sequence_number[0x20]; + u8 reserved[0x10]; + u8 ipsec_syndrome[0x8]; + u8 next_header[0x8]; +}; + +struct mlx5_ifc_definer_hl_metadata_bits { + u8 metadata_to_cqe[0x20]; + u8 general_purpose[0x20]; + u8 acomulated_hash[0x20]; +}; + +struct mlx5_ifc_definer_hl_flex_parser_bits { + u8 flex_parser_7[0x20]; + u8 flex_parser_6[0x20]; + u8 flex_parser_5[0x20]; + u8 flex_parser_4[0x20]; + u8 flex_parser_3[0x20]; + u8 flex_parser_2[0x20]; + u8 flex_parser_1[0x20]; + u8 flex_parser_0[0x20]; +}; + +struct mlx5_ifc_definer_hl_registers_bits { + u8 register_c_10[0x20]; + u8 register_c_11[0x20]; + u8 register_c_8[0x20]; + u8 register_c_9[0x20]; + u8 register_c_6[0x20]; + u8 register_c_7[0x20]; + u8 register_c_4[0x20]; + u8 register_c_5[0x20]; + u8 register_c_2[0x20]; + u8 register_c_3[0x20]; + u8 register_c_0[0x20]; + u8 register_c_1[0x20]; +}; + +struct mlx5_ifc_definer_hl_bits { + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_outer; + struct mlx5_ifc_definer_hl_eth_l2_bits eth_l2_inner; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_outer; + struct mlx5_ifc_definer_hl_eth_l2_src_bits eth_l2_src_inner; + struct mlx5_ifc_definer_hl_ib_l2_bits ib_l2; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_outer; + struct mlx5_ifc_definer_hl_eth_l3_bits eth_l3_inner; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_outer; + struct mlx5_ifc_definer_hl_eth_l4_bits eth_l4_inner; + struct mlx5_ifc_definer_hl_src_qp_gvmi_bits source_qp_gvmi; + struct mlx5_ifc_definer_hl_ib_l4_bits ib_l4; + struct mlx5_ifc_definer_hl_oks1_bits oks1; + struct mlx5_ifc_definer_hl_oks2_bits oks2; + struct mlx5_ifc_definer_hl_voq_bits voq; + u8 reserved_at_480[0x380]; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_outer; + struct mlx5_ifc_definer_hl_ipv4_src_dst_bits ipv4_src_dest_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_dst_inner; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_outer; + struct mlx5_ifc_definer_hl_ipv6_addr_bits ipv6_src_inner; + u8 unsupported_dest_ib_l3[0x80]; + u8 unsupported_source_ib_l3[0x80]; + u8 unsupported_udp_misc_outer[0x20]; + u8 unsupported_udp_misc_inner[0x20]; + struct mlx5_ifc_definer_tcp_icmp_header_bits tcp_icmp; + struct mlx5_ifc_definer_hl_tunnel_header_bits tunnel_header; + u8 unsupported_mpls_outer[0xa0]; + u8 unsupported_mpls_inner[0xa0]; + u8 unsupported_config_headers_outer[0x80]; + u8 unsupported_config_headers_inner[0x80]; + u8 
unsupported_random_number[0x20]; + struct mlx5_ifc_definer_hl_ipsec_bits ipsec; + struct mlx5_ifc_definer_hl_metadata_bits metadata; + u8 unsupported_utc_timestamp[0x40]; + u8 unsupported_free_running_timestamp[0x40]; + struct mlx5_ifc_definer_hl_flex_parser_bits flex_parser; + struct mlx5_ifc_definer_hl_registers_bits registers; + /* struct x ib_l3_extended; */ + /* struct x rwh */ + /* struct x dcceth */ + /* struct x dceth */ +}; + +enum mlx5dr_definer_gtp { + MLX5DR_DEFINER_GTP_EXT_HDR_BIT = 0x04, +}; + +struct mlx5_ifc_header_gtp_bits { + u8 version[0x3]; + u8 proto_type[0x1]; + u8 reserved1[0x1]; + u8 ext_hdr_flag[0x1]; + u8 seq_num_flag[0x1]; + u8 pdu_flag[0x1]; + u8 msg_type[0x8]; + u8 msg_len[0x8]; + u8 teid[0x20]; +}; + +struct mlx5_ifc_header_opt_gtp_bits { + u8 seq_num[0x10]; + u8 pdu_num[0x8]; + u8 next_ext_hdr_type[0x8]; +}; + +struct mlx5_ifc_header_gtp_psc_bits { + u8 len[0x8]; + u8 pdu_type[0x4]; + u8 flags[0x4]; + u8 qfi[0x8]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_ipv6_vtc_bits { + u8 version[0x4]; + union { + u8 tos[0x8]; + struct { + u8 dscp[0x6]; + u8 ecn[0x2]; + }; + }; + u8 flow_label[0x14]; +}; + +struct mlx5_ifc_header_vxlan_bits { + u8 flags[0x8]; + u8 reserved1[0x18]; + u8 vni[0x18]; + u8 reserved2[0x8]; +}; + +struct mlx5_ifc_header_gre_bits { + union { + u8 c_rsvd0_ver[0x10]; + struct { + u8 gre_c_present[0x1]; + u8 reserved_at_1[0x1]; + u8 gre_k_present[0x1]; + u8 gre_s_present[0x1]; + u8 reserved_at_4[0x9]; + u8 version[0x3]; + }; + }; + u8 gre_protocol[0x10]; + u8 checksum[0x10]; + u8 reserved_at_30[0x10]; +}; + +struct mlx5_ifc_header_icmp_bits { + union { + u8 icmp_dw1[0x20]; + struct { + u8 type[0x8]; + u8 code[0x8]; + u8 cksum[0x10]; + }; + }; + union { + u8 icmp_dw2[0x20]; + struct { + u8 ident[0x10]; + u8 seq_nb[0x10]; + }; + }; +}; + +struct mlx5dr_definer { + enum mlx5dr_definer_type type; + uint8_t dw_selector[DW_SELECTORS]; + uint8_t byte_selector[BYTE_SELECTORS]; + struct mlx5dr_rule_match_tag mask; + struct mlx5dr_devx_obj *obj; +}; + +static inline bool +mlx5dr_definer_is_jumbo(struct mlx5dr_definer *definer) +{ + return (definer->type == MLX5DR_DEFINER_TYPE_JUMBO); +} + +void mlx5dr_definer_create_tag(const struct rte_flow_item *items, + struct mlx5dr_definer_fc *fc, + uint32_t fc_sz, + uint8_t *tag); + +int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a, + struct mlx5dr_definer *definer_b); + +int mlx5dr_definer_get_id(struct mlx5dr_definer *definer); + +int mlx5dr_definer_get(struct mlx5dr_context *ctx, + struct mlx5dr_match_template *mt); + +void mlx5dr_definer_put(struct mlx5dr_match_template *mt); + +#endif /* MLX5DR_DEFINER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
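[Editor's note] A minimal usage sketch of the definer API declared above, assuming a caller (such as the matcher layer) that already holds an mlx5dr_context and a populated match template; the wrapper function name and the rule_items/tag parameters are illustrative only and not part of the patch:

	static int
	example_build_definer_and_tag(struct mlx5dr_context *ctx,
				      struct mlx5dr_match_template *mt,
				      const struct rte_flow_item *rule_items,
				      uint8_t *tag)
	{
		int ret;

		/* Convert the template items, pick DW/byte selectors and
		 * create the HW definer object (refcounted per template).
		 */
		ret = mlx5dr_definer_get(ctx, mt);
		if (ret)
			return ret;

		/* Per rule: copy the item spec values into the match tag
		 * using the field copy (fc) array bound to the definer.
		 */
		mlx5dr_definer_create_tag(rule_items, mt->fc, mt->fc_sz, tag);

		/* Release the template reference; the definer object is
		 * destroyed when the last user calls put().
		 */
		mlx5dr_definer_put(mt);
		return 0;
	}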
* RE: [v6 11/18] net/mlx5/hws: Add HWS definer layer 2022-10-20 15:57 ` [v6 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-24 6:53 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:53 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Ferruh Yigit, Matan Azrad Cc: dev, Ori Kam, Mark Bloch > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Ferruh Yigit > <ferruh.yigit@amd.com>; Matan Azrad <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Mark Bloch <mbloch@nvidia.com> > Subject: [v6 11/18] net/mlx5/hws: Add HWS definer layer > > Definers are HW objects that are used for matching, rte items > are translated to definers, each definer holds the fields and > bit-masks used for HW flow matching. The definer layer is used > for finding the most efficient definer for each set of items. > In addition to definer creation we also calculate the field > copy (fc) array used for efficient items to WQE conversion. > > Signed-off-by: Mark Bloch <mbloch@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
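[Editor's note] The "most efficient definer" selection described above boils down to a selector budget: a regular match definer may use up to DW_SELECTORS_MATCH (6) full DW selectors plus BYTE_SELECTORS (8) byte selectors, while a jumbo definer may use up to DW_SELECTORS (9) DW selectors plus the same 8 byte selectors; when the device does not report full_dw_jumbo_support, the 3 extra jumbo selectors are "limited" DWs that can only cover the first 64 DWs of the header layout. The hypothetical helper below only illustrates that coarse budget check; the real code additionally lets byte selectors absorb sparsely used DWs, restricts byte selectors to offsets below 256, and searches recursively (mlx5dr_definer_best_hl_fit_recu):

	/* Hypothetical helper, assuming dws_needed/bytes_needed were
	 * already counted from the header layout; returns 0 and sets
	 * *type on success, ENOTSUP if no definer type can fit.
	 */
	static int
	example_pick_definer_type(uint32_t dws_needed, uint32_t bytes_needed,
				  enum mlx5dr_definer_type *type)
	{
		/* The byte selector budget is the same for both types */
		if (bytes_needed > BYTE_SELECTORS)
			return ENOTSUP;

		if (dws_needed <= DW_SELECTORS_MATCH)
			*type = MLX5DR_DEFINER_TYPE_MATCH;
		else if (dws_needed <= DW_SELECTORS)
			*type = MLX5DR_DEFINER_TYPE_JUMBO;
		else
			return ENOTSUP;

		return 0;
	}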
* [v6 12/18] net/mlx5/hws: Add HWS context object 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (10 preceding siblings ...) 2022-10-20 15:57 ` [v6 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 13/18] net/mlx5/hws: Add HWS table object Alex Vesker ` (6 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Context is the first mlx5dr object created, all sub object: table, matcher, rule, action are created using the context. The context holds the capabilities and send queues used for configuring the offloads to the HW. Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_context.h | 40 +++++ 2 files changed, 263 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c new file mode 100644 index 0000000000..ae86694a51 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.c @@ -0,0 +1,223 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx) +{ + struct mlx5dr_pool_attr pool_attr = {0}; + uint8_t max_log_sz; + int i; + + if (mlx5dr_pat_init_pattern_cache(&ctx->pattern_cache)) + return rte_errno; + + /* Create an STC pool per FT type */ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STC; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL; + max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max); + pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + pool_attr.table_type = i; + ctx->stc_pool[i] = mlx5dr_pool_create(ctx, &pool_attr); + if (!ctx->stc_pool[i]) { + DR_LOG(ERR, "Failed to allocate STC pool [%d]", i); + goto free_stc_pools; + } + } + + return 0; + +free_stc_pools: + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + return rte_errno; +} + +static void mlx5dr_context_pools_uninit(struct mlx5dr_context *ctx) +{ + int i; + + mlx5dr_pat_uninit_pattern_cache(ctx->pattern_cache); + + for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) { + if (ctx->stc_pool[i]) + mlx5dr_pool_destroy(ctx->stc_pool[i]); + } +} + +static int mlx5dr_context_init_pd(struct mlx5dr_context *ctx, + struct ibv_pd *pd) +{ + struct mlx5dv_pd mlx5_pd = {0}; + struct mlx5dv_obj obj; + int ret; + + if (pd) { + ctx->pd = pd; + } else { + ctx->pd = mlx5_glue->alloc_pd(ctx->ibv_ctx); + if (!ctx->pd) { + DR_LOG(ERR, "Failed to allocate PD"); + rte_errno = errno; + return rte_errno; + } + ctx->flags |= MLX5DR_CONTEXT_FLAG_PRIVATE_PD; + } + + obj.pd.in = ctx->pd; + obj.pd.out = &mlx5_pd; + + ret = mlx5_glue->dv_init_obj(&obj, MLX5DV_OBJ_PD); + if (ret) + goto free_private_pd; + + ctx->pd_num = mlx5_pd.pdn; + + return 0; + +free_private_pd: + if (ctx->flags & MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + mlx5_glue->dealloc_pd(ctx->pd); + + return ret; +} + +static int mlx5dr_context_uninit_pd(struct mlx5dr_context *ctx) +{ + if (ctx->flags & 
MLX5DR_CONTEXT_FLAG_PRIVATE_PD) + return mlx5_glue->dealloc_pd(ctx->pd); + + return 0; +} + +static void mlx5dr_context_check_hws_supp(struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + + /* HWS not supported on device / FW */ + if (!caps->wqe_based_update) { + DR_LOG(INFO, "Required HWS WQE based insertion cap not supported"); + return; + } + + /* Current solution requires all rules to set reparse bit */ + if ((!caps->nic_ft.reparse || !caps->fdb_ft.reparse) || + !IS_BIT_SET(caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS)) { + DR_LOG(INFO, "Required HWS reparse cap not supported"); + return; + } + + /* FW/HW must support 8DW STE */ + if (!IS_BIT_SET(caps->ste_format, MLX5_IFC_RTC_STE_FORMAT_8DW)) { + DR_LOG(INFO, "Required HWS STE format not supported"); + return; + } + + /* Adding rules by hash and by offset are requirements */ + if (!IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH) || + !IS_BIT_SET(caps->rtc_index_mode, MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET)) { + DR_LOG(INFO, "Required HWS RTC update mode not supported"); + return; + } + + /* Support for SELECT definer ID is required */ + if (!IS_BIT_SET(caps->definer_format_sup, MLX5_IFC_DEFINER_FORMAT_ID_SELECT)) { + DR_LOG(INFO, "Required HWS Dynamic definer not supported"); + return; + } + + ctx->flags |= MLX5DR_CONTEXT_FLAG_HWS_SUPPORT; +} + +static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx, + struct mlx5dr_context_attr *attr) +{ + int ret; + + mlx5dr_context_check_hws_supp(ctx); + + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return 0; + + ret = mlx5dr_context_init_pd(ctx, attr->pd); + if (ret) + return ret; + + ret = mlx5dr_context_pools_init(ctx); + if (ret) + goto uninit_pd; + + ret = mlx5dr_send_queues_open(ctx, attr->queues, attr->queue_size); + if (ret) + goto pools_uninit; + + return 0; + +pools_uninit: + mlx5dr_context_pools_uninit(ctx); +uninit_pd: + mlx5dr_context_uninit_pd(ctx); + return ret; +} + +static void mlx5dr_context_uninit_hws(struct mlx5dr_context *ctx) +{ + if (!(ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) + return; + + mlx5dr_send_queues_close(ctx); + mlx5dr_context_pools_uninit(ctx); + mlx5dr_context_uninit_pd(ctx); +} + +struct mlx5dr_context *mlx5dr_context_open(struct ibv_context *ibv_ctx, + struct mlx5dr_context_attr *attr) +{ + struct mlx5dr_context *ctx; + int ret; + + ctx = simple_calloc(1, sizeof(*ctx)); + if (!ctx) { + rte_errno = ENOMEM; + return NULL; + } + + ctx->ibv_ctx = ibv_ctx; + pthread_spin_init(&ctx->ctrl_lock, PTHREAD_PROCESS_PRIVATE); + + ctx->caps = simple_calloc(1, sizeof(*ctx->caps)); + if (!ctx->caps) + goto free_ctx; + + ret = mlx5dr_cmd_query_caps(ibv_ctx, ctx->caps); + if (ret) + goto free_caps; + + ret = mlx5dr_context_init_hws(ctx, attr); + if (ret) + goto free_caps; + + return ctx; + +free_caps: + simple_free(ctx->caps); +free_ctx: + simple_free(ctx); + return NULL; +} + +int mlx5dr_context_close(struct mlx5dr_context *ctx) +{ + mlx5dr_context_uninit_hws(ctx); + simple_free(ctx->caps); + pthread_spin_destroy(&ctx->ctrl_lock); + simple_free(ctx); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h new file mode 100644 index 0000000000..b0c7802daf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_context.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_CONTEXT_H_ +#define MLX5DR_CONTEXT_H_ + +enum mlx5dr_context_flags { + MLX5DR_CONTEXT_FLAG_HWS_SUPPORT 
= 1 << 0, + MLX5DR_CONTEXT_FLAG_PRIVATE_PD = 1 << 1, +}; + +enum mlx5dr_context_shared_stc_type { + MLX5DR_CONTEXT_SHARED_STC_DECAP = 0, + MLX5DR_CONTEXT_SHARED_STC_POP = 1, + MLX5DR_CONTEXT_SHARED_STC_MAX = 2, +}; + +struct mlx5dr_context_common_res { + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_action_shared_stc *shared_stc[MLX5DR_CONTEXT_SHARED_STC_MAX]; + struct mlx5dr_cmd_forward_tbl *default_miss; +}; + +struct mlx5dr_context { + struct ibv_context *ibv_ctx; + struct mlx5dr_cmd_query_caps *caps; + struct ibv_pd *pd; + uint32_t pd_num; + struct mlx5dr_pool *stc_pool[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_context_common_res common_res[MLX5DR_TABLE_TYPE_MAX]; + struct mlx5dr_pattern_cache *pattern_cache; + pthread_spinlock_t ctrl_lock; + enum mlx5dr_context_flags flags; + struct mlx5dr_send_engine *send_queue; + size_t queues; + LIST_HEAD(table_head, mlx5dr_table) head; +}; + +#endif /* MLX5DR_CONTEXT_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
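[A minimal caller-side sketch of the open/close flow in this patch. Everything here is inferred from the patch itself: the attr field names match their use in mlx5dr_context_init_hws()/mlx5dr_context_init_pd(), the public header name is assumed, and "NULL pd requests a private PD" is an assumption based on the MLX5DR_CONTEXT_FLAG_PRIVATE_PD teardown path, not stated explicitly in the code shown.]

#include <stdio.h>
#include <infiniband/verbs.h>
#include <rte_errno.h>
#include "mlx5dr.h"  /* assumed public header exposing mlx5dr_context_* */

static struct mlx5dr_context *example_open_ctx(struct ibv_context *ibv_ctx)
{
    struct mlx5dr_context_attr attr = {0};
    struct mlx5dr_context *ctx;

    attr.queues = 16;      /* number of rule insertion queues to open */
    attr.queue_size = 256; /* depth of each send queue */
    attr.pd = NULL;        /* assumed: NULL requests a private PD */

    ctx = mlx5dr_context_open(ibv_ctx, &attr);
    if (!ctx)
        printf("mlx5dr_context_open failed, rte_errno %d\n", rte_errno);

    return ctx; /* released later with mlx5dr_context_close(ctx) */
}

Note that mlx5dr_context_open() still succeeds when the capability checks in mlx5dr_context_check_hws_supp() fail; MLX5DR_CONTEXT_FLAG_HWS_SUPPORT is simply never set, and only root steering remains usable.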
* RE: [v6 12/18] net/mlx5/hws: Add HWS context object 2022-10-20 15:57 ` [v6 12/18] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-24 6:53 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:53 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 12/18] net/mlx5/hws: Add HWS context object > > Context is the first mlx5dr object created, all sub object: > table, matcher, rule, action are created using the context. > The context holds the capabilities and send queues used for configuring the > offloads to the HW. > > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 13/18] net/mlx5/hws: Add HWS table object 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (11 preceding siblings ...) 2022-10-20 15:57 ` [v6 12/18] net/mlx5/hws: Add HWS context object Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker ` (5 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS table resides under the context object, each context can have multiple tables with different steering types RX/TX/FDB. The table is not only a logical object but it is also represented in the HW, packets can be steered to the table and from there to other tables. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_table.h | 44 +++++ 2 files changed, 292 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h diff --git a/drivers/net/mlx5/hws/mlx5dr_table.c b/drivers/net/mlx5/hws/mlx5dr_table.c new file mode 100644 index 0000000000..d3f77e4780 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.c @@ -0,0 +1,248 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_table_init_next_ft_attr(struct mlx5dr_table *tbl, + struct mlx5dr_cmd_ft_create_attr *ft_attr) +{ + ft_attr->type = tbl->fw_ft_type; + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + ft_attr->level = tbl->ctx->caps->fdb_ft.max_level - 1; + else + ft_attr->level = tbl->ctx->caps->nic_ft.max_level - 1; + ft_attr->rtc_valid = true; +} + +/* Call this under ctx->ctrl_lock */ +static int +mlx5dr_table_up_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + uint32_t vport; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return 0; + + if (ctx->common_res[tbl_type].default_miss) { + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; + } + + ft_attr.type = tbl->fw_ft_type; + ft_attr.level = tbl->ctx->caps->fdb_ft.max_level; /* The last level */ + ft_attr.rtc_valid = false; + + assert(ctx->caps->eswitch_manager); + vport = ctx->caps->eswitch_manager_vport_number; + + default_miss = mlx5dr_cmd_miss_ft_create(ctx->ibv_ctx, &ft_attr, vport); + if (!default_miss) { + DR_LOG(ERR, "Failed to default miss table type: 0x%x", tbl_type); + return rte_errno; + } + + ctx->common_res[tbl_type].default_miss = default_miss; + ctx->common_res[tbl_type].default_miss->refcount++; + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +static void mlx5dr_table_down_default_fdb_miss_tbl(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_forward_tbl *default_miss; + struct mlx5dr_context *ctx = tbl->ctx; + uint8_t tbl_type = tbl->type; + + if (tbl->type != MLX5DR_TABLE_TYPE_FDB) + return; + + default_miss = ctx->common_res[tbl_type].default_miss; + if (--default_miss->refcount) + return; + + mlx5dr_cmd_miss_ft_destroy(default_miss); + + simple_free(default_miss); + ctx->common_res[tbl_type].default_miss = NULL; +} + +static int 
+mlx5dr_table_connect_to_default_miss_tbl(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + int ret; + + assert(tbl->type == MLX5DR_TABLE_TYPE_FDB); + + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + + /* Connect to next */ + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect FT to default FDB FT"); + return errno; + } + + return 0; +} + +struct mlx5dr_devx_obj * +mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl) +{ + struct mlx5dr_cmd_ft_create_attr ft_attr = {0}; + struct mlx5dr_devx_obj *ft_obj; + int ret; + + mlx5dr_table_init_next_ft_attr(tbl, &ft_attr); + + ft_obj = mlx5dr_cmd_flow_table_create(tbl->ctx->ibv_ctx, &ft_attr); + if (ft_obj && tbl->type == MLX5DR_TABLE_TYPE_FDB) { + /* Take/create ref over the default miss */ + ret = mlx5dr_table_up_default_fdb_miss_tbl(tbl); + if (ret) { + DR_LOG(ERR, "Failed to get default fdb miss"); + goto free_ft_obj; + } + ret = mlx5dr_table_connect_to_default_miss_tbl(tbl, ft_obj); + if (ret) { + DR_LOG(ERR, "Failed connecting to default miss tbl"); + goto down_miss_tbl; + } + } + + return ft_obj; + +down_miss_tbl: + mlx5dr_table_down_default_fdb_miss_tbl(tbl); +free_ft_obj: + mlx5dr_cmd_destroy_obj(ft_obj); + return NULL; +} + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj) +{ + mlx5dr_table_down_default_fdb_miss_tbl(tbl); + mlx5dr_cmd_destroy_obj(ft_obj); +} + +static int mlx5dr_table_init(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + int ret; + + if (mlx5dr_table_is_root(tbl)) + return 0; + + if (!(tbl->ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT)) { + DR_LOG(ERR, "HWS not supported, cannot create mlx5dr_table"); + rte_errno = EOPNOTSUPP; + return rte_errno; + } + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + tbl->fw_ft_type = FS_FT_NIC_RX; + break; + case MLX5DR_TABLE_TYPE_NIC_TX: + tbl->fw_ft_type = FS_FT_NIC_TX; + break; + case MLX5DR_TABLE_TYPE_FDB: + tbl->fw_ft_type = FS_FT_FDB; + break; + default: + assert(0); + break; + } + + pthread_spin_lock(&ctx->ctrl_lock); + tbl->ft = mlx5dr_table_create_default_ft(tbl); + if (!tbl->ft) { + DR_LOG(ERR, "Failed to create flow table devx object"); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; + } + + ret = mlx5dr_action_get_default_stc(ctx, tbl->type); + if (ret) + goto tbl_destroy; + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +tbl_destroy: + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_table_uninit(struct mlx5dr_table *tbl) +{ + if (mlx5dr_table_is_root(tbl)) + return; + pthread_spin_lock(&tbl->ctx->ctrl_lock); + mlx5dr_action_put_default_stc(tbl->ctx, tbl->type); + mlx5dr_table_destroy_default_ft(tbl, tbl->ft); + pthread_spin_unlock(&tbl->ctx->ctrl_lock); +} + +struct mlx5dr_table *mlx5dr_table_create(struct mlx5dr_context *ctx, + struct mlx5dr_table_attr *attr) +{ + struct mlx5dr_table *tbl; + int ret; + + if (attr->type > MLX5DR_TABLE_TYPE_FDB) { + DR_LOG(ERR, "Invalid table type %d", attr->type); + return NULL; + } + + tbl = simple_malloc(sizeof(*tbl)); + if (!tbl) { + rte_errno = ENOMEM; + return NULL; + } + + tbl->ctx = ctx; + tbl->type = attr->type; + tbl->level = attr->level; + LIST_INIT(&tbl->head); + + ret = mlx5dr_table_init(tbl); + if (ret) { + DR_LOG(ERR, "Failed to initialise table"); + goto free_tbl; + } + + 
pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&ctx->head, tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return tbl; + +free_tbl: + simple_free(tbl); + return NULL; +} + +int mlx5dr_table_destroy(struct mlx5dr_table *tbl) +{ + struct mlx5dr_context *ctx = tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(tbl, next); + pthread_spin_unlock(&ctx->ctrl_lock); + mlx5dr_table_uninit(tbl); + simple_free(tbl); + + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_table.h b/drivers/net/mlx5/hws/mlx5dr_table.h new file mode 100644 index 0000000000..786dddfaa4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_table.h @@ -0,0 +1,44 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_TABLE_H_ +#define MLX5DR_TABLE_H_ + +#define MLX5DR_ROOT_LEVEL 0 + +struct mlx5dr_table { + struct mlx5dr_context *ctx; + struct mlx5dr_devx_obj *ft; + enum mlx5dr_table_type type; + uint32_t fw_ft_type; + uint32_t level; + LIST_HEAD(matcher_head, mlx5dr_matcher) head; + LIST_ENTRY(mlx5dr_table) next; +}; + +static inline +uint32_t mlx5dr_table_get_res_fw_ft_type(enum mlx5dr_table_type tbl_type, + bool is_mirror) +{ + if (tbl_type == MLX5DR_TABLE_TYPE_NIC_RX) + return FS_FT_NIC_RX; + else if (tbl_type == MLX5DR_TABLE_TYPE_NIC_TX) + return FS_FT_NIC_TX; + else if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + return is_mirror ? FS_FT_FDB_TX : FS_FT_FDB_RX; + + assert(0); + return 0; +} + +static inline bool mlx5dr_table_is_root(struct mlx5dr_table *tbl) +{ + return (tbl->level == MLX5DR_ROOT_LEVEL); +} + +struct mlx5dr_devx_obj *mlx5dr_table_create_default_ft(struct mlx5dr_table *tbl); + +void mlx5dr_table_destroy_default_ft(struct mlx5dr_table *tbl, + struct mlx5dr_devx_obj *ft_obj); +#endif /* MLX5DR_TABLE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
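[A short usage sketch for the table API in this patch; the attr field names follow their use in mlx5dr_table_create() and the error values come from the paths in mlx5dr_table_init(). The header name is assumed.]

#include <rte_errno.h>
#include "mlx5dr.h"  /* assumed public header */

static int example_create_tables(struct mlx5dr_context *ctx)
{
    struct mlx5dr_table_attr attr = {0};
    struct mlx5dr_table *root, *tbl;

    attr.type = MLX5DR_TABLE_TYPE_FDB;
    attr.level = 0; /* MLX5DR_ROOT_LEVEL: mlx5dr_table_init() skips devx FT creation */
    root = mlx5dr_table_create(ctx, &attr);
    if (!root)
        return -rte_errno;

    attr.level = 1; /* non-root: needs MLX5DR_CONTEXT_FLAG_HWS_SUPPORT */
    tbl = mlx5dr_table_create(ctx, &attr);
    if (!tbl) {
        mlx5dr_table_destroy(root);
        return -rte_errno; /* e.g. EOPNOTSUPP when HWS caps are missing */
    }

    /* ... attach matchers and insert rules ... */

    mlx5dr_table_destroy(tbl);
    mlx5dr_table_destroy(root);
    return 0;
}

For FDB tables the first non-root table also creates the shared default-miss table via mlx5dr_table_up_default_fdb_miss_tbl(); later FDB tables only take a reference, and the object is freed when the last reference is dropped.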
* RE: [v6 13/18] net/mlx5/hws: Add HWS table object 2022-10-20 15:57 ` [v6 13/18] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-24 6:54 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:54 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Erez Shitrit > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Erez Shitrit > <erezsh@nvidia.com> > Subject: [v6 13/18] net/mlx5/hws: Add HWS table object > > HWS table resides under the context object, each context can have multiple > tables with different steering types RX/TX/FDB. > The table is not only a logical object but it is also represented in the HW, > packets can be steered to the table and from there to other tables. > > Signed-off-by: Erez Shitrit <erezsh@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 14/18] net/mlx5/hws: Add HWS matcher object 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (12 preceding siblings ...) 2022-10-20 15:57 ` [v6 13/18] net/mlx5/hws: Add HWS table object Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker ` (4 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS matcher resides under the table object, each table can have multiple chained matcher with different attributes. Each matcher represents a combination of match and action templates. Each matcher can contain multiple configurations based on the templates. Packets are steered from the table to the matcher and from there to other objects. The matcher allows efficent HW packet field matching and action execution based on the configuration done to it. Signed-off-by: Alex Vesker <valex@nvidia.com> Signed-off-by: Erez Shitrit <erezsh@nvidia.com> --- drivers/common/mlx5/linux/meson.build | 2 + drivers/net/mlx5/hws/mlx5dr_matcher.c | 919 ++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 +++ 3 files changed, 997 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index b044f95700..e6b32eb84d 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -74,6 +74,8 @@ has_member_args = [ 'struct ibv_counters_init_attr', 'comp_mask' ], [ 'HAVE_MLX5DV_DEVX_UAR_OFFSET', 'infiniband/mlx5dv.h', 'struct mlx5dv_devx_uar', 'mmap_off' ], + [ 'HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE', 'infiniband/mlx5dv.h', + 'struct mlx5dv_flow_matcher_attr', 'ft_type' ], ] # input array for meson symbol search: # [ "MACRO to define if found", "header for the search", diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c new file mode 100644 index 0000000000..d1205c42fa --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c @@ -0,0 +1,919 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static bool mlx5dr_matcher_requires_col_tbl(uint8_t log_num_of_rules) +{ + /* Collision table concatenation is done only for large rule tables */ + return log_num_of_rules > MLX5DR_MATCHER_ASSURED_RULES_TH; +} + +static uint8_t mlx5dr_matcher_rules_to_tbl_depth(uint8_t log_num_of_rules) +{ + if (mlx5dr_matcher_requires_col_tbl(log_num_of_rules)) + return MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH; + + /* For small rule tables we use a single deep table to assure insertion */ + return RTE_MIN(log_num_of_rules, MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH); +} + +static int mlx5dr_matcher_create_end_ft(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + matcher->end_ft = mlx5dr_table_create_default_ft(tbl); + if (!matcher->end_ft) { + DR_LOG(ERR, "Failed to create matcher end flow table"); + return rte_errno; + } + return 0; +} + +static void mlx5dr_matcher_destroy_end_ft(struct mlx5dr_matcher *matcher) +{ + mlx5dr_table_destroy_default_ft(matcher->tbl, matcher->end_ft); +} + +static int mlx5dr_matcher_connect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + 
struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *prev = NULL; + struct mlx5dr_matcher *next = NULL; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *ft; + int ret; + + /* Find location in matcher list */ + if (LIST_EMPTY(&tbl->head)) { + LIST_INSERT_HEAD(&tbl->head, matcher, next); + goto connect; + } + + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher->attr.priority > matcher->attr.priority) { + next = tmp_matcher; + break; + } + prev = tmp_matcher; + } + + if (next) + LIST_INSERT_BEFORE(next, matcher, next); + else + LIST_INSERT_AFTER(prev, matcher, next); + +connect: + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = tbl->fw_ft_type; + + /* Connect to next */ + if (next) { + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(matcher->end_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to next RTC"); + goto remove_from_list; + } + } + + /* Connect to previous */ + ft = prev ? prev->end_ft : tbl->ft; + + if (matcher->match_ste.rtc_0) + ft_attr.rtc_id_0 = matcher->match_ste.rtc_0->id; + if (matcher->match_ste.rtc_1) + ft_attr.rtc_id_1 = matcher->match_ste.rtc_1->id; + + ret = mlx5dr_cmd_flow_table_modify(ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to connect new matcher to previous FT"); + goto remove_from_list; + } + + return 0; + +remove_from_list: + LIST_REMOVE(matcher, next); + return ret; +} + +static int mlx5dr_matcher_disconnect(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_cmd_ft_modify_attr ft_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_matcher *tmp_matcher; + struct mlx5dr_devx_obj *prev_ft; + struct mlx5dr_matcher *next; + int ret; + + prev_ft = matcher->tbl->ft; + LIST_FOREACH(tmp_matcher, &tbl->head, next) { + if (tmp_matcher == matcher) + break; + + prev_ft = tmp_matcher->end_ft; + } + + next = matcher->next.le_next; + + ft_attr.modify_fs = MLX5_IFC_MODIFY_FLOW_TABLE_RTC_ID; + ft_attr.type = matcher->tbl->fw_ft_type; + + if (next) { + /* Connect previous end FT to next RTC if exists */ + if (next->match_ste.rtc_0) + ft_attr.rtc_id_0 = next->match_ste.rtc_0->id; + if (next->match_ste.rtc_1) + ft_attr.rtc_id_1 = next->match_ste.rtc_1->id; + } else { + /* Matcher is last, point prev end FT to default miss */ + mlx5dr_cmd_set_attr_connect_miss_tbl(tbl->ctx, + tbl->fw_ft_type, + tbl->type, + &ft_attr); + } + + ret = mlx5dr_cmd_flow_table_modify(prev_ft, &ft_attr); + if (ret) { + DR_LOG(ERR, "Failed to disconnect matcher"); + return ret; + } + + LIST_REMOVE(matcher, next); + + return 0; +} + +static void mlx5dr_matcher_set_rtc_attr_sz(struct mlx5dr_matcher *matcher, + struct mlx5dr_cmd_rtc_create_attr *rtc_attr, + bool is_match_rtc, + bool is_mirror) +{ + enum mlx5dr_matcher_flow_src flow_src = matcher->attr.optimize_flow_src; + struct mlx5dr_pool_chunk *ste = &matcher->action_ste.ste; + + if ((flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT && !is_mirror) || + (flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE && is_mirror)) { + /* Optimize FDB RTC */ + rtc_attr->log_size = 0; + rtc_attr->log_depth = 0; + } else { + /* Keep original values */ + rtc_attr->log_size = is_match_rtc ? matcher->attr.table.sz_row_log : ste->order; + rtc_attr->log_depth = is_match_rtc ? 
matcher->attr.table.sz_col_log : 0; + } +} + +static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + const char *rtc_type_str = is_match_rtc ? "match" : "action"; + struct mlx5dr_cmd_rtc_create_attr rtc_attr = {0}; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_action_default_stc *default_stc; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj **rtc_0, **rtc_1; + struct mlx5dr_pool *ste_pool, *stc_pool; + struct mlx5dr_devx_obj *devx_obj; + struct mlx5dr_pool_chunk *ste; + int ret; + + if (is_match_rtc) { + rtc_0 = &matcher->match_ste.rtc_0; + rtc_1 = &matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + ste->order = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = matcher->attr.table.sz_row_log; + rtc_attr.log_depth = matcher->attr.table.sz_col_log; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_HASH; + /* The first match template is used since all share the same definer */ + rtc_attr.definer_id = mlx5dr_definer_get_id(matcher->mt[0]->definer); + rtc_attr.is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + rtc_attr.miss_ft_id = matcher->end_ft->id; + /* Match pool requires implicit allocation */ + ret = mlx5dr_pool_chunk_alloc(ste_pool, ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for %s RTC", rtc_type_str); + return ret; + } + } else { + rtc_0 = &matcher->action_ste.rtc_0; + rtc_1 = &matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + ste->order = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + rtc_attr.log_size = ste->order; + rtc_attr.log_depth = 0; + rtc_attr.update_index_mode = MLX5_IFC_RTC_STE_UPDATE_MODE_BY_OFFSET; + /* The action STEs use the default always hit definer */ + rtc_attr.definer_id = ctx->caps->trivial_match_definer; + rtc_attr.is_jumbo = false; + rtc_attr.miss_ft_id = 0; + } + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + + rtc_attr.pd = ctx->pd_num; + rtc_attr.ste_base = devx_obj->id; + rtc_attr.ste_offset = ste->offset; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false); + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, false); + + /* STC is a single resource (devx_obj), use any STC for the ID */ + stc_pool = ctx->stc_pool[tbl->type]; + default_stc = ctx->common_res[tbl->type].default_stc; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + + *rtc_0 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_0) { + DR_LOG(ERR, "Failed to create matcher %s RTC", rtc_type_str); + goto free_ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + rtc_attr.ste_base = devx_obj->id; + rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, true); + + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, &default_stc->default_hit); + rtc_attr.stc_base = devx_obj->id; + mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, is_match_rtc, true); + + *rtc_1 = mlx5dr_cmd_rtc_create(ctx->ibv_ctx, &rtc_attr); + if (!*rtc_1) { + DR_LOG(ERR, "Failed to create peer matcher %s RTC0", rtc_type_str); + goto destroy_rtc_0; + } + } + + return 0; + +destroy_rtc_0: + mlx5dr_cmd_destroy_obj(*rtc_0); +free_ste: + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); + return rte_errno; +} 
+ +static void mlx5dr_matcher_destroy_rtc(struct mlx5dr_matcher *matcher, + bool is_match_rtc) +{ + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_devx_obj *rtc_0, *rtc_1; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + + if (is_match_rtc) { + rtc_0 = matcher->match_ste.rtc_0; + rtc_1 = matcher->match_ste.rtc_1; + ste_pool = matcher->match_ste.pool; + ste = &matcher->match_ste.ste; + } else { + rtc_0 = matcher->action_ste.rtc_0; + rtc_1 = matcher->action_ste.rtc_1; + ste_pool = matcher->action_ste.pool; + ste = &matcher->action_ste.ste; + } + + if (tbl->type == MLX5DR_TABLE_TYPE_FDB) + mlx5dr_cmd_destroy_obj(rtc_1); + + mlx5dr_cmd_destroy_obj(rtc_0); + if (is_match_rtc) + mlx5dr_pool_chunk_free(ste_pool, ste); +} + +static void mlx5dr_matcher_set_pool_attr(struct mlx5dr_pool_attr *attr, + struct mlx5dr_matcher *matcher) +{ + switch (matcher->attr.optimize_flow_src) { + case MLX5DR_MATCHER_FLOW_SRC_VPORT: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_ORIG; + break; + case MLX5DR_MATCHER_FLOW_SRC_WIRE: + attr->opt_type = MLX5DR_POOL_OPTIMIZE_MIRROR; + break; + default: + break; + } +} + +static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher) +{ + bool is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_pool_attr pool_attr = {0}; + struct mlx5dr_context *ctx = tbl->ctx; + uint32_t required_stes; + int i, ret; + bool valid; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + /* Check if action combinabtion is valid */ + valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type); + if (!valid) { + DR_LOG(ERR, "Invalid combination in action template %d", i); + return rte_errno; + } + + /* Process action template to setters */ + ret = mlx5dr_action_template_process(at); + if (ret) { + DR_LOG(ERR, "Failed to process action template %d", i); + return rte_errno; + } + + required_stes = at->num_of_action_stes - (!is_jumbo || at->only_term); + matcher->action_ste.max_stes = RTE_MAX(matcher->action_ste.max_stes, required_stes); + + /* Future: Optimize reparse */ + } + + /* There are no additioanl STEs required for matcher */ + if (!matcher->action_ste.max_stes) + return 0; + + /* Allocate action STE mempool */ + pool_attr.table_type = tbl->type; + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STE_ACTION_POOL; + pool_attr.alloc_log_sz = rte_log2_u32(matcher->action_ste.max_stes) + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + matcher->action_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->action_ste.pool) { + DR_LOG(ERR, "Failed to create action ste pool"); + return rte_errno; + } + + /* Allocate action RTC */ + ret = mlx5dr_matcher_create_rtc(matcher, false); + if (ret) { + DR_LOG(ERR, "Failed to create action RTC"); + goto free_ste_pool; + } + + /* Allocate STC for jumps to STE */ + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE; + stc_attr.ste_table.ste = matcher->action_ste.ste; + stc_attr.ste_table.ste_pool = matcher->action_ste.pool; + stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer; + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl->type, + &matcher->action_ste.stc); + if (ret) { + DR_LOG(ERR, "Failed to create action jump to table STC"); + goto free_rtc; + } + + return 0; + 
+free_rtc: + mlx5dr_matcher_destroy_rtc(matcher, false); +free_ste_pool: + mlx5dr_pool_destroy(matcher->action_ste.pool); + return rte_errno; +} + +static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_table *tbl = matcher->tbl; + + if (!matcher->action_ste.max_stes) + return; + + mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc); + mlx5dr_matcher_destroy_rtc(matcher, false); + mlx5dr_pool_destroy(matcher->action_ste.pool); +} + +static int mlx5dr_matcher_bind_mt(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_pool_attr pool_attr = {0}; + int i, created = 0; + int ret = -1; + + for (i = 0; i < matcher->num_of_mt; i++) { + /* Get a definer for each match template */ + ret = mlx5dr_definer_get(ctx, matcher->mt[i]); + if (ret) + goto definer_put; + + created++; + + /* Verify all templates produce the same definer */ + if (i == 0) + continue; + + ret = mlx5dr_definer_compare(matcher->mt[i]->definer, + matcher->mt[i - 1]->definer); + if (ret) { + DR_LOG(ERR, "Match templates cannot be used on the same matcher"); + rte_errno = ENOTSUP; + goto definer_put; + } + } + + /* Create an STE pool per matcher*/ + pool_attr.pool_type = MLX5DR_POOL_TYPE_STE; + pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_MATCHER_STE_POOL; + pool_attr.table_type = matcher->tbl->type; + pool_attr.alloc_log_sz = matcher->attr.table.sz_col_log + + matcher->attr.table.sz_row_log; + mlx5dr_matcher_set_pool_attr(&pool_attr, matcher); + + matcher->match_ste.pool = mlx5dr_pool_create(ctx, &pool_attr); + if (!matcher->match_ste.pool) { + DR_LOG(ERR, "Failed to allocate matcher STE pool"); + goto definer_put; + } + + return 0; + +definer_put: + while (created--) + mlx5dr_definer_put(matcher->mt[created]); + + return ret; +} + +static void mlx5dr_matcher_unbind_mt(struct mlx5dr_matcher *matcher) +{ + int i; + + for (i = 0; i < matcher->num_of_mt; i++) + mlx5dr_definer_put(matcher->mt[i]); + + mlx5dr_pool_destroy(matcher->match_ste.pool); +} + +static int +mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps, + struct mlx5dr_matcher *matcher, + bool is_root) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + + if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB && attr->optimize_flow_src) { + DR_LOG(ERR, "NIC domain doesn't support flow_src"); + goto not_supported; + } + + if (is_root) { + if (attr->mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) { + DR_LOG(ERR, "Root matcher supports only rule resource mode"); + goto not_supported; + } + if (attr->optimize_flow_src) { + DR_LOG(ERR, "Root matcher can't specify FDB direction"); + goto not_supported; + } + return 0; + } + + /* Convert number of rules to the required depth */ + if (attr->mode == MLX5DR_MATCHER_RESOURCE_MODE_RULE) + attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log); + + if (attr->table.sz_col_log > caps->rtc_log_depth_max) { + DR_LOG(ERR, "Matcher depth exceeds limit %d", caps->rtc_log_depth_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log > caps->ste_alloc_log_max) { + DR_LOG(ERR, "Total matcher size exceeds limit %d", caps->ste_alloc_log_max); + goto not_supported; + } + + if (attr->table.sz_col_log + attr->table.sz_row_log < caps->ste_alloc_log_gran) { + DR_LOG(ERR, "Total matcher size below limit %d", caps->ste_alloc_log_gran); + goto not_supported; + } + + return 0; + +not_supported: + rte_errno = EOPNOTSUPP; + return rte_errno; +} + +static int mlx5dr_matcher_create_and_connect(struct 
mlx5dr_matcher *matcher) +{ + int ret; + + /* Select and create the definers for current matcher */ + ret = mlx5dr_matcher_bind_mt(matcher); + if (ret) + return ret; + + /* Calculate and verify action combination */ + ret = mlx5dr_matcher_bind_at(matcher); + if (ret) + goto unbind_mt; + + /* Create matcher end flow table anchor */ + ret = mlx5dr_matcher_create_end_ft(matcher); + if (ret) + goto unbind_at; + + /* Allocate the RTC for the new matcher */ + ret = mlx5dr_matcher_create_rtc(matcher, true); + if (ret) + goto destroy_end_ft; + + /* Connect the matcher to the matcher list */ + ret = mlx5dr_matcher_connect(matcher); + if (ret) + goto destroy_rtc; + + return 0; + +destroy_rtc: + mlx5dr_matcher_destroy_rtc(matcher, true); +destroy_end_ft: + mlx5dr_matcher_destroy_end_ft(matcher); +unbind_at: + mlx5dr_matcher_unbind_at(matcher); +unbind_mt: + mlx5dr_matcher_unbind_mt(matcher); + return ret; +} + +static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher) +{ + mlx5dr_matcher_disconnect(matcher); + mlx5dr_matcher_destroy_rtc(matcher, true); + mlx5dr_matcher_destroy_end_ft(matcher); + mlx5dr_matcher_unbind_at(matcher); + mlx5dr_matcher_unbind_mt(matcher); +} + +static int +mlx5dr_matcher_create_col_matcher(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_matcher *col_matcher; + int ret; + + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return 0; + + if (!mlx5dr_matcher_requires_col_tbl(matcher->attr.rule.num_log)) + return 0; + + col_matcher = simple_calloc(1, sizeof(*matcher)); + if (!col_matcher) { + rte_errno = ENOMEM; + return rte_errno; + } + + col_matcher->tbl = matcher->tbl; + col_matcher->num_of_mt = matcher->num_of_mt; + memcpy(col_matcher->mt, matcher->mt, matcher->num_of_mt * sizeof(*matcher->mt)); + col_matcher->num_of_at = matcher->num_of_at; + memcpy(col_matcher->at, matcher->at, matcher->num_of_at * sizeof(*matcher->at)); + + col_matcher->attr.priority = matcher->attr.priority; + col_matcher->attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_HTABLE; + col_matcher->attr.optimize_flow_src = matcher->attr.optimize_flow_src; + col_matcher->attr.table.sz_row_log = matcher->attr.rule.num_log; + col_matcher->attr.table.sz_col_log = MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH; + if (col_matcher->attr.table.sz_row_log > MLX5DR_MATCHER_ASSURED_ROW_RATIO) + col_matcher->attr.table.sz_row_log -= MLX5DR_MATCHER_ASSURED_ROW_RATIO; + + ret = mlx5dr_matcher_process_attr(ctx->caps, col_matcher, false); + if (ret) + goto free_col_matcher; + + ret = mlx5dr_matcher_create_and_connect(col_matcher); + if (ret) + goto free_col_matcher; + + matcher->col_matcher = col_matcher; + + return 0; + +free_col_matcher: + simple_free(col_matcher); + DR_LOG(ERR, "Failed to create assured collision matcher"); + return ret; +} + +static void +mlx5dr_matcher_destroy_col_matcher(struct mlx5dr_matcher *matcher) +{ + if (matcher->attr.mode != MLX5DR_MATCHER_RESOURCE_MODE_RULE) + return; + + if (matcher->col_matcher) { + mlx5dr_matcher_destroy_and_disconnect(matcher->col_matcher); + simple_free(matcher->col_matcher); + } +} + +static int mlx5dr_matcher_init(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate matcher resource and connect to the packet pipe */ + ret = mlx5dr_matcher_create_and_connect(matcher); + if (ret) + goto unlock_err; + + /* Create additional matcher for collision handling */ + ret = 
mlx5dr_matcher_create_col_matcher(matcher); + if (ret) + goto destory_and_disconnect; + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +destory_and_disconnect: + mlx5dr_matcher_destroy_and_disconnect(matcher); +unlock_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return ret; +} + +static int mlx5dr_matcher_uninit(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + + pthread_spin_lock(&ctx->ctrl_lock); + mlx5dr_matcher_destroy_col_matcher(matcher); + mlx5dr_matcher_destroy_and_disconnect(matcher); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; +} + +static int mlx5dr_matcher_init_root(struct mlx5dr_matcher *matcher) +{ + enum mlx5dr_table_type type = matcher->tbl->type; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dv_flow_matcher_attr attr = {0}; + struct mlx5dv_flow_match_parameters *mask; + struct mlx5_flow_attr flow_attr = {0}; + struct rte_flow_error rte_error; + uint8_t match_criteria; + int ret; + + switch (type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + break; +#ifdef HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE + case MLX5DR_TABLE_TYPE_FDB: + attr.comp_mask = MLX5DV_FLOW_MATCHER_MASK_FT_TYPE; + attr.ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; + break; +#endif + default: + assert(0); + break; + } + + if (matcher->attr.priority > UINT16_MAX) { + DR_LOG(ERR, "Root matcher priority exceeds allowed limit"); + rte_errno = EINVAL; + return rte_errno; + } + + mask = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!mask) { + rte_errno = ENOMEM; + return rte_errno; + } + + flow_attr.tbl_type = type; + + /* On root table matcher, only a single match template is supported */ + ret = flow_dv_translate_items_hws(matcher->mt[0]->items, + &flow_attr, mask->match_buf, + MLX5_SET_MATCHER_HS_M, NULL, + &match_criteria, + &rte_error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", rte_error.message); + goto free_mask; + } + + mask->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + attr.match_mask = mask; + attr.match_criteria_enable = match_criteria; + attr.type = IBV_FLOW_ATTR_NORMAL; + attr.priority = matcher->attr.priority; + + matcher->dv_matcher = + mlx5_glue->dv_create_flow_matcher_root(ctx->ibv_ctx, &attr); + if (!matcher->dv_matcher) { + DR_LOG(ERR, "Failed to create DV flow matcher"); + rte_errno = errno; + goto free_mask; + } + + simple_free(mask); + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_INSERT_HEAD(&matcher->tbl->head, matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_mask: + simple_free(mask); + return rte_errno; +} + +static int mlx5dr_matcher_uninit_root(struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_context *ctx = matcher->tbl->ctx; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + LIST_REMOVE(matcher, next); + pthread_spin_unlock(&ctx->ctrl_lock); + + ret = mlx5_glue->dv_destroy_flow_matcher_root(matcher->dv_matcher); + if (ret) { + DR_LOG(ERR, "Failed to Destroy DV flow matcher"); + rte_errno = errno; + } + + return ret; +} + +static int +mlx5dr_matcher_check_template(uint8_t num_of_mt, uint8_t num_of_at, bool is_root) +{ + uint8_t max_num_of_mt; + + max_num_of_mt = is_root ? 
+ MLX5DR_MATCHER_MAX_MT_ROOT : + MLX5DR_MATCHER_MAX_MT; + + if (!num_of_mt || !num_of_at) { + DR_LOG(ERR, "Number of action/match template cannot be zero"); + goto out_not_sup; + } + + if (num_of_at > MLX5DR_MATCHER_MAX_AT) { + DR_LOG(ERR, "Number of action templates exceeds limit"); + goto out_not_sup; + } + + if (num_of_mt > max_num_of_mt) { + DR_LOG(ERR, "Number of match templates exceeds limit"); + goto out_not_sup; + } + + return 0; + +out_not_sup: + rte_errno = ENOTSUP; + return rte_errno; +} + +struct mlx5dr_matcher * +mlx5dr_matcher_create(struct mlx5dr_table *tbl, + struct mlx5dr_match_template *mt[], + uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, + struct mlx5dr_matcher_attr *attr) +{ + bool is_root = mlx5dr_table_is_root(tbl); + struct mlx5dr_matcher *matcher; + int ret; + + ret = mlx5dr_matcher_check_template(num_of_mt, num_of_at, is_root); + if (ret) + return NULL; + + matcher = simple_calloc(1, sizeof(*matcher)); + if (!matcher) { + rte_errno = ENOMEM; + return NULL; + } + + matcher->tbl = tbl; + matcher->attr = *attr; + matcher->num_of_mt = num_of_mt; + memcpy(matcher->mt, mt, num_of_mt * sizeof(*mt)); + matcher->num_of_at = num_of_at; + memcpy(matcher->at, at, num_of_at * sizeof(*at)); + + ret = mlx5dr_matcher_process_attr(tbl->ctx->caps, matcher, is_root); + if (ret) + goto free_matcher; + + if (is_root) + ret = mlx5dr_matcher_init_root(matcher); + else + ret = mlx5dr_matcher_init(matcher); + + if (ret) { + DR_LOG(ERR, "Failed to initialise matcher: %d", ret); + goto free_matcher; + } + + return matcher; + +free_matcher: + simple_free(matcher); + return NULL; +} + +int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher) +{ + if (mlx5dr_table_is_root(matcher->tbl)) + mlx5dr_matcher_uninit_root(matcher); + else + mlx5dr_matcher_uninit(matcher); + + simple_free(matcher); + return 0; +} + +struct mlx5dr_match_template * +mlx5dr_match_template_create(const struct rte_flow_item items[], + enum mlx5dr_match_template_flags flags) +{ + struct mlx5dr_match_template *mt; + struct rte_flow_error error; + int ret, len; + + if (flags > MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH) { + DR_LOG(ERR, "Unsupported match template flag provided"); + rte_errno = EINVAL; + return NULL; + } + + mt = simple_calloc(1, sizeof(*mt)); + if (!mt) { + DR_LOG(ERR, "Failed to allocate match template"); + rte_errno = ENOMEM; + return NULL; + } + + mt->flags = flags; + + /* Duplicate the user given items */ + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, NULL, 0, items, &error); + if (ret <= 0) { + DR_LOG(ERR, "Unable to process items (%s): %s", + error.message ? 
error.message : "unspecified", + strerror(rte_errno)); + goto free_template; + } + + len = RTE_ALIGN(ret, 16); + mt->items = simple_calloc(1, len); + if (!mt->items) { + DR_LOG(ERR, "Failed to allocate item copy"); + rte_errno = ENOMEM; + goto free_template; + } + + ret = rte_flow_conv(RTE_FLOW_CONV_OP_PATTERN, mt->items, ret, items, &error); + if (ret <= 0) + goto free_dst; + + return mt; + +free_dst: + simple_free(mt->items); +free_template: + simple_free(mt); + return NULL; +} + +int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) +{ + assert(!mt->refcount); + simple_free(mt->items); + simple_free(mt); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h new file mode 100644 index 0000000000..b7bf94762c --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h @@ -0,0 +1,76 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_MATCHER_H_ +#define MLX5DR_MATCHER_H_ + +/* Max supported match template */ +#define MLX5DR_MATCHER_MAX_MT 2 +#define MLX5DR_MATCHER_MAX_MT_ROOT 1 + +/* Max supported action template */ +#define MLX5DR_MATCHER_MAX_AT 4 + +/* We calculated that concatenating a collision table to the main table with + * 3% of the main table rows will be enough resources for high insertion + * success probability. + * + * The calculation: log2(2^x * 3 / 100) = log2(2^x) + log2(3/100) = x - 5.05 ~ 5 + */ +#define MLX5DR_MATCHER_ASSURED_ROW_RATIO 5 +/* Thrashold to determine if amount of rules require a collision table */ +#define MLX5DR_MATCHER_ASSURED_RULES_TH 10 +/* Required depth of an assured collision table */ +#define MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH 4 +/* Required depth of the main large table */ +#define MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH 2 + +struct mlx5dr_match_template { + struct rte_flow_item *items; + struct mlx5dr_definer *definer; + struct mlx5dr_definer_fc *fc; + uint32_t fc_sz; + uint64_t item_flags; + uint8_t vport_item_id; + enum mlx5dr_match_template_flags flags; + uint32_t refcount; +}; + +struct mlx5dr_matcher_match_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; +}; + +struct mlx5dr_matcher_action_ste { + struct mlx5dr_pool_chunk ste; + struct mlx5dr_pool_chunk stc; + struct mlx5dr_devx_obj *rtc_0; + struct mlx5dr_devx_obj *rtc_1; + struct mlx5dr_pool *pool; + uint8_t max_stes; +}; + +struct mlx5dr_matcher { + struct mlx5dr_table *tbl; + struct mlx5dr_matcher_attr attr; + struct mlx5dv_flow_matcher *dv_matcher; + struct mlx5dr_match_template *mt[MLX5DR_MATCHER_MAX_MT]; + uint8_t num_of_mt; + struct mlx5dr_action_template *at[MLX5DR_MATCHER_MAX_AT]; + uint8_t num_of_at; + struct mlx5dr_devx_obj *end_ft; + struct mlx5dr_matcher *col_matcher; + struct mlx5dr_matcher_match_ste match_ste; + struct mlx5dr_matcher_action_ste action_ste; + LIST_ENTRY(mlx5dr_matcher) next; +}; + +int mlx5dr_matcher_conv_items_to_prm(uint64_t *match_buf, + struct rte_flow_item *items, + uint8_t *match_criteria, + bool is_value); + +#endif /* MLX5DR_MATCHER_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
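[A caller-side sketch for the matcher API, using only what this patch exports; the action templates come from the action object patch later in the series and are taken here as ready-made parameters. The header name is assumed.]

#include <rte_flow.h>
#include "mlx5dr.h"  /* assumed public header */

static struct mlx5dr_matcher *
example_create_matcher(struct mlx5dr_table *tbl,
                       const struct rte_flow_item items[],
                       struct mlx5dr_action_template *at[],
                       uint8_t num_of_at)
{
    struct mlx5dr_matcher_attr attr = {0};
    struct mlx5dr_match_template *mt[1];
    struct mlx5dr_matcher *matcher;

    mt[0] = mlx5dr_match_template_create(items, 0 /* default, non-relaxed match */);
    if (!mt[0])
        return NULL;

    attr.priority = 0;
    attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
    attr.rule.num_log = 16; /* sized for roughly 64K rules */

    matcher = mlx5dr_matcher_create(tbl, mt, 1, at, num_of_at, &attr);
    if (!matcher) {
        mlx5dr_match_template_destroy(mt[0]);
        return NULL;
    }

    /* mt[0] must stay alive until mlx5dr_matcher_destroy(matcher) */
    return matcher;
}

Since rule.num_log (16) is above MLX5DR_MATCHER_ASSURED_RULES_TH, mlx5dr_matcher_create_col_matcher() will also build the hidden collision matcher behind this one.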
* RE: [v6 14/18] net/mlx5/hws: Add HWS matcher object 2022-10-20 15:57 ` [v6 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-24 6:54 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:54 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Erez Shitrit > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Erez Shitrit > <erezsh@nvidia.com> > Subject: [v6 14/18] net/mlx5/hws: Add HWS matcher object > > HWS matcher resides under the table object, each table can have multiple > chained matcher with different attributes. Each matcher represents a > combination of match and action templates. > Each matcher can contain multiple configurations based on the templates. > Packets are steered from the table to the matcher and from there to other > objects. The matcher allows efficent HW packet field matching and action > execution based on the configuration done to it. > > Signed-off-by: Alex Vesker <valex@nvidia.com> > Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
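As a quick check of the sizing comment in mlx5dr_matcher.h above: log2(3/100) ~ -5.05, so subtracting MLX5DR_MATCHER_ASSURED_ROW_RATIO (5) from the main table's row log gives a collision table of 2^x / 32 rows, i.e. about 3.1% of the main table's 2^x rows. For example, with rule.num_log = 20 the collision matcher gets 2^(20-5) = 2^15 rows, each 2^4 = 16 STEs deep (MLX5DR_MATCHER_ASSURED_COL_TBL_DEPTH), while the main matcher's rows are only 2^2 = 4 STEs deep (MLX5DR_MATCHER_ASSURED_MAIN_TBL_DEPTH).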
* [v6 15/18] net/mlx5/hws: Add HWS rule object 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (13 preceding siblings ...) 2022-10-20 15:57 ` [v6 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 16/18] net/mlx5/hws: Add HWS action object Alex Vesker ` (3 subsequent siblings) 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit HWS rule objects reside under the matcher, each rule holds the configuration for the packet fields to match on and the set of actions to execute over the packet that has the requested fields. Rules can be created asynchronously in parallel over multiple queues to different matchers. Each rule is configured to the HW. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_rule.c | 528 +++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_rule.h | 50 +++ 2 files changed, 578 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c new file mode 100644 index 0000000000..b27318e6d4 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.c @@ -0,0 +1,528 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher, + const struct rte_flow_item *items, + bool *skip_rx, bool *skip_tx) +{ + struct mlx5dr_match_template *mt = matcher->mt[0]; + const struct flow_hw_port_info *vport; + const struct rte_flow_item_ethdev *v; + + /* Flow_src is the 1st priority */ + if (matcher->attr.optimize_flow_src) { + *skip_tx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_WIRE; + *skip_rx = matcher->attr.optimize_flow_src == MLX5DR_MATCHER_FLOW_SRC_VPORT; + return; + } + + /* By default FDB rules are added to both RX and TX */ + *skip_rx = false; + *skip_tx = false; + + if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) { + v = items[mt->vport_item_id].spec; + vport = flow_hw_conv_port_id(v->port_id); + if (unlikely(!vport)) { + DR_LOG(ERR, "Fail to map port ID %d, ignoring", v->port_id); + return; + } + + if (!vport->is_wire) + /* Match vport ID is not WIRE -> Skip RX */ + *skip_rx = true; + else + /* Match vport ID is WIRE -> Skip TX */ + *skip_tx = true; + } +} + +static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe, + struct mlx5dr_rule *rule, + const struct rte_flow_item *items, + void *user_data) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + bool skip_rx, skip_tx; + + dep_wqe->rule = rule; + dep_wqe->user_data = user_data; + + switch (tbl->type) { + case MLX5DR_TABLE_TYPE_NIC_RX: + case MLX5DR_TABLE_TYPE_NIC_TX: + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_0->id : 0; + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + break; + + case MLX5DR_TABLE_TYPE_FDB: + mlx5dr_rule_skip(matcher, items, &skip_rx, &skip_tx); + + if (!skip_rx) { + dep_wqe->rtc_0 = matcher->match_ste.rtc_0->id; + dep_wqe->retry_rtc_0 = matcher->col_matcher ? 
+ matcher->col_matcher->match_ste.rtc_0->id : 0; + } else { + dep_wqe->rtc_0 = 0; + dep_wqe->retry_rtc_0 = 0; + } + + if (!skip_tx) { + dep_wqe->rtc_1 = matcher->match_ste.rtc_1->id; + dep_wqe->retry_rtc_1 = matcher->col_matcher ? + matcher->col_matcher->match_ste.rtc_1->id : 0; + } else { + dep_wqe->rtc_1 = 0; + dep_wqe->retry_rtc_1 = 0; + } + + break; + + default: + assert(false); + break; + } +} + +static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue, + struct mlx5dr_rule *rule, + bool err, + void *user_data, + enum mlx5dr_rule_status rule_status_on_succ) +{ + enum rte_flow_op_status comp_status; + + if (!err) { + comp_status = RTE_FLOW_OP_SUCCESS; + rule->status = rule_status_on_succ; + } else { + comp_status = RTE_FLOW_OP_ERROR; + rule->status = MLX5DR_RULE_STATUS_FAILED; + } + + mlx5dr_send_engine_inc_rule(queue); + mlx5dr_send_engine_gen_comp(queue, user_data, comp_status); +} + +static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + int ret; + + /* Use rule_idx for locking optimzation, otherwise allocate from pool */ + if (matcher->attr.optimize_using_rule_idx) { + rule->action_ste_idx = attr->rule_idx * matcher->action_ste.max_stes; + } else { + struct mlx5dr_pool_chunk ste = {0}; + + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ret = mlx5dr_pool_chunk_alloc(matcher->action_ste.pool, &ste); + if (ret) { + DR_LOG(ERR, "Failed to allocate STE for rule actions"); + return ret; + } + rule->action_ste_idx = ste.offset; + } + return 0; +} + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + + if (rule->action_ste_idx > -1 && !matcher->attr.optimize_using_rule_idx) { + struct mlx5dr_pool_chunk ste = {0}; + + /* This release is safe only when the rule match part was deleted */ + ste.order = rte_log2_u32(matcher->action_ste.max_stes); + ste.offset = rule->action_ste_idx; + mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste); + } +} + +static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule, + struct mlx5dr_send_ste_attr *ste_attr, + struct mlx5dr_actions_apply_data *apply) +{ + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_table *tbl = matcher->tbl; + struct mlx5dr_context *ctx = tbl->ctx; + + /* Init rule before reuse */ + rule->rtc_0 = 0; + rule->rtc_1 = 0; + rule->pending_wqes = 0; + rule->action_ste_idx = -1; + rule->status = MLX5DR_RULE_STATUS_CREATING; + + /* Init default send STE attributes */ + ste_attr->gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE; + ste_attr->send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr->send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr->send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + /* Init default action apply */ + apply->tbl_type = tbl->type; + apply->common_res = &ctx->common_res[tbl->type]; + apply->jump_to_action_stc = matcher->action_ste.stc.offset; + apply->require_dep = 0; +} + +static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dr_action_template *at = rule->matcher->at[at_idx]; + struct mlx5dr_match_template *mt = rule->matcher->mt[mt_idx]; + bool is_jumbo = mlx5dr_definer_is_jumbo(mt->definer); + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_context *ctx = matcher->tbl->ctx; + struct mlx5dr_send_ste_attr 
ste_attr = {0}; + struct mlx5dr_send_ring_dep_wqe *dep_wqe; + struct mlx5dr_actions_wqe_setter *setter; + struct mlx5dr_actions_apply_data apply; + struct mlx5dr_send_engine *queue; + uint8_t total_stes, action_stes; + int i, ret; + + queue = &ctx->send_queue[attr->queue_id]; + if (unlikely(mlx5dr_send_engine_err(queue))) { + rte_errno = EIO; + return rte_errno; + } + + mlx5dr_rule_create_init(rule, &ste_attr, &apply); + + /* Allocate dependent match WQE since rule might have dependent writes. + * The queued dependent WQE can be later aborted or kept as a dependency. + * dep_wqe buffers (ctrl, data) are also reused for all STE writes. + */ + dep_wqe = mlx5dr_send_add_new_dep_wqe(queue); + mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, attr->user_data); + + ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl; + ste_attr.wqe_data = &dep_wqe->wqe_data; + apply.wqe_ctrl = &dep_wqe->wqe_ctrl; + apply.wqe_data = (uint32_t *)&dep_wqe->wqe_data; + apply.rule_action = rule_actions; + apply.queue = queue; + + setter = &at->setters[at->num_of_action_stes]; + total_stes = at->num_of_action_stes + (is_jumbo && !at->only_term); + action_stes = total_stes - 1; + + if (action_stes) { + /* Allocate action STEs for complex rules */ + ret = mlx5dr_rule_alloc_action_ste(rule, attr); + if (ret) { + DR_LOG(ERR, "Failed to allocate action memory %d", ret); + mlx5dr_send_abort_new_dep_wqe(queue); + return ret; + } + /* Skip RX/TX based on the dep_wqe init */ + ste_attr.rtc_0 = dep_wqe->rtc_0 ? matcher->action_ste.rtc_0->id : 0; + ste_attr.rtc_1 = dep_wqe->rtc_1 ? matcher->action_ste.rtc_1->id : 0; + /* Action STEs are written to a specific index last to first */ + ste_attr.direct_index = rule->action_ste_idx + action_stes; + apply.next_direct_idx = ste_attr.direct_index; + } else { + apply.next_direct_idx = 0; + } + + for (i = total_stes; i-- > 0;) { + mlx5dr_action_apply_setter(&apply, setter--, !i && is_jumbo); + + if (i == 0) { + /* Handle last match STE */ + mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, + (uint8_t *)dep_wqe->wqe_data.action); + + /* Rule has dependent WQEs, match dep_wqe is queued */ + if (action_stes || apply.require_dep) + break; + + /* Rule has no dependencies, abort dep_wqe and send WQE now */ + mlx5dr_send_abort_new_dep_wqe(queue); + ste_attr.wqe_tag_is_jumbo = is_jumbo; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = dep_wqe->user_data; + ste_attr.send_attr.rule = dep_wqe->rule; + ste_attr.direct_index = 0; + ste_attr.rtc_0 = dep_wqe->rtc_0; + ste_attr.rtc_1 = dep_wqe->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0; + ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1; + } else { + apply.next_direct_idx = --ste_attr.direct_index; + } + + mlx5dr_send_ste(queue, &ste_attr); + } + + /* Backup TAG on the rule for deletion */ + if (is_jumbo) + memcpy(rule->tag.jumbo, dep_wqe->wqe_data.action, MLX5DR_JUMBO_TAG_SZ); + else + memcpy(rule->tag.match, dep_wqe->wqe_data.tag, MLX5DR_MATCH_TAG_SZ); + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQEs */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + return 0; +} + +static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + mlx5dr_rule_gen_comp(queue, rule, false, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + /* Rule failed now we 
can safely release action STEs */ + mlx5dr_rule_free_action_ste_idx(rule); + + /* If a rule that was indicated as burst (need to trigger HW) has failed + * insertion we won't ring the HW as nothing is being written to the WQ. + * In such case update the last WQE and ring the HW with that work + */ + if (attr->burst) + return; + + mlx5dr_send_all_dep_wqe(queue); + mlx5dr_send_engine_flush_queue(queue); +} + +static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dr_matcher *matcher = rule->matcher; + struct mlx5dr_wqe_gta_ctrl_seg wqe_ctrl = {0}; + struct mlx5dr_send_ste_attr ste_attr = {0}; + struct mlx5dr_send_engine *queue; + + queue = &ctx->send_queue[attr->queue_id]; + + /* Rule is not completed yet */ + if (rule->status == MLX5DR_RULE_STATUS_CREATING) { + rte_errno = EBUSY; + return rte_errno; + } + + /* Rule failed and doesn't require cleanup */ + if (rule->status == MLX5DR_RULE_STATUS_FAILED) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + if (unlikely(mlx5dr_send_engine_err(queue))) { + mlx5dr_rule_destroy_failed_hws(rule, attr); + return 0; + } + + mlx5dr_send_engine_inc_rule(queue); + + /* Send dependent WQE */ + if (!attr->burst) + mlx5dr_send_all_dep_wqe(queue); + + rule->status = MLX5DR_RULE_STATUS_DELETING; + + ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE; + ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + + ste_attr.send_attr.rule = rule; + ste_attr.send_attr.notify_hw = !attr->burst; + ste_attr.send_attr.user_data = attr->user_data; + + ste_attr.rtc_0 = rule->rtc_0; + ste_attr.rtc_1 = rule->rtc_1; + ste_attr.used_id_rtc_0 = &rule->rtc_0; + ste_attr.used_id_rtc_1 = &rule->rtc_1; + ste_attr.wqe_ctrl = &wqe_ctrl; + ste_attr.wqe_tag = &rule->tag; + ste_attr.wqe_tag_is_jumbo = mlx5dr_definer_is_jumbo(matcher->mt[0]->definer); + ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE; + + mlx5dr_send_ste(queue, &ste_attr); + + return 0; +} + +static int mlx5dr_rule_create_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *rule_attr, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[]) +{ + struct mlx5dv_flow_matcher *dv_matcher = rule->matcher->dv_matcher; + uint8_t num_actions = rule->matcher->at[at_idx]->num_actions; + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + struct mlx5dv_flow_match_parameters *value; + struct mlx5_flow_attr flow_attr = {0}; + struct mlx5dv_flow_action_attr *attr; + struct rte_flow_error error; + uint8_t match_criteria; + int ret; + + attr = simple_calloc(num_actions, sizeof(*attr)); + if (!attr) { + rte_errno = ENOMEM; + return rte_errno; + } + + value = simple_calloc(1, MLX5_ST_SZ_BYTES(fte_match_param) + + offsetof(struct mlx5dv_flow_match_parameters, match_buf)); + if (!value) { + rte_errno = ENOMEM; + goto free_attr; + } + + flow_attr.tbl_type = rule->matcher->tbl->type; + + ret = flow_dv_translate_items_hws(items, &flow_attr, value->match_buf, + MLX5_SET_MATCHER_HS_V, NULL, + &match_criteria, + &error); + if (ret) { + DR_LOG(ERR, "Failed to convert items to PRM [%s]", error.message); + goto free_value; + } + + /* Convert actions to verb action attr */ + ret = mlx5dr_action_root_build_attr(rule_actions, num_actions, attr); + if (ret) + goto free_value; + + /* Create verb flow */ + value->match_sz = MLX5_ST_SZ_BYTES(fte_match_param); + rule->flow = 
mlx5_glue->dv_create_flow_root(dv_matcher, + value, + num_actions, + attr); + + mlx5dr_rule_gen_comp(&ctx->send_queue[rule_attr->queue_id], rule, !rule->flow, + rule_attr->user_data, MLX5DR_RULE_STATUS_CREATED); + + simple_free(value); + simple_free(attr); + + return 0; + +free_value: + simple_free(value); +free_attr: + simple_free(attr); + + return -rte_errno; +} + +static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int err = 0; + + if (rule->flow) + err = ibv_destroy_flow(rule->flow); + + mlx5dr_rule_gen_comp(&ctx->send_queue[attr->queue_id], rule, err, + attr->user_data, MLX5DR_RULE_STATUS_DELETED); + + return 0; +} + +int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, + uint8_t mt_idx, + const struct rte_flow_item items[], + uint8_t at_idx, + struct mlx5dr_rule_action rule_actions[], + struct mlx5dr_rule_attr *attr, + struct mlx5dr_rule *rule_handle) +{ + struct mlx5dr_context *ctx; + int ret; + + rule_handle->matcher = matcher; + ctx = matcher->tbl->ctx; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + assert(matcher->num_of_mt >= mt_idx); + assert(matcher->num_of_at >= at_idx); + + if (unlikely(mlx5dr_table_is_root(matcher->tbl))) + ret = mlx5dr_rule_create_root(rule_handle, + attr, + items, + at_idx, + rule_actions); + else + ret = mlx5dr_rule_create_hws(rule_handle, + attr, + mt_idx, + items, + at_idx, + rule_actions); + return -ret; +} + +int mlx5dr_rule_destroy(struct mlx5dr_rule *rule, + struct mlx5dr_rule_attr *attr) +{ + struct mlx5dr_context *ctx = rule->matcher->tbl->ctx; + int ret; + + if (unlikely(!attr->user_data)) { + rte_errno = EINVAL; + return -rte_errno; + } + + /* Check if there is room in queue */ + if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) { + rte_errno = EBUSY; + return -rte_errno; + } + + if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl))) + ret = mlx5dr_rule_destroy_root(rule, attr); + else + ret = mlx5dr_rule_destroy_hws(rule, attr); + + return -ret; +} + +size_t mlx5dr_rule_get_handle_size(void) +{ + return sizeof(struct mlx5dr_rule); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h new file mode 100644 index 0000000000..96c85674f2 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_rule.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_RULE_H_ +#define MLX5DR_RULE_H_ + +enum { + MLX5DR_STE_CTRL_SZ = 20, + MLX5DR_ACTIONS_SZ = 12, + MLX5DR_MATCH_TAG_SZ = 32, + MLX5DR_JUMBO_TAG_SZ = 44, +}; + +enum mlx5dr_rule_status { + MLX5DR_RULE_STATUS_UNKNOWN, + MLX5DR_RULE_STATUS_CREATING, + MLX5DR_RULE_STATUS_CREATED, + MLX5DR_RULE_STATUS_DELETING, + MLX5DR_RULE_STATUS_DELETED, + MLX5DR_RULE_STATUS_FAILING, + MLX5DR_RULE_STATUS_FAILED, +}; + +struct mlx5dr_rule_match_tag { + union { + uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ]; + struct { + uint8_t reserved[MLX5DR_ACTIONS_SZ]; + uint8_t match[MLX5DR_MATCH_TAG_SZ]; + }; + }; +}; + +struct mlx5dr_rule { + struct mlx5dr_matcher *matcher; + union { + struct mlx5dr_rule_match_tag tag; + struct ibv_flow *flow; + }; + uint32_t rtc_0; /* The RTC into which the STE was inserted */ + uint32_t rtc_1; /* The RTC into which the STE was inserted */ + int action_ste_idx; /* 
Action STE pool ID */ + uint8_t status; /* enum mlx5dr_rule_status */ + uint8_t pending_wqes; +}; + +void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule); + +#endif /* MLX5DR_RULE_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
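
A minimal usage sketch of the rule API introduced in this patch (illustrative only, not part of the patch): the caller allocates the handle with mlx5dr_rule_get_handle_size(), fills the attribute fields used by the code above (queue_id, burst, user_data) and enqueues the rule. The helper name example_insert_rule, the 0 matcher/action template indexes and the error handling are placeholders, and the public header is assumed to be the series' mlx5dr.h.

#include <stdlib.h>
#include <rte_errno.h>
#include <rte_flow.h>
#include "mlx5dr.h"

static int example_insert_rule(struct mlx5dr_matcher *matcher,
			       const struct rte_flow_item items[],
			       struct mlx5dr_rule_action rule_actions[],
			       struct mlx5dr_rule **out_rule)
{
	struct mlx5dr_rule_attr attr = {
		.queue_id = 0,
		.burst = 0,	/* ring the HW doorbell right away */
	};
	struct mlx5dr_rule *rule;

	/* The rule handle is caller-allocated, sized by the library */
	rule = calloc(1, mlx5dr_rule_get_handle_size());
	if (!rule)
		return -ENOMEM;

	/* user_data must be non-NULL; it is returned with the completion */
	attr.user_data = rule;

	if (mlx5dr_rule_create(matcher, 0 /* mt_idx */, items,
			       0 /* at_idx */, rule_actions, &attr, rule)) {
		/* rte_errno holds the reason, e.g. EBUSY when the queue is full */
		free(rule);
		return -rte_errno;
	}

	/* Creation is asynchronous: the rule is usable only once the
	 * completion carrying user_data is polled from queue 0 through the
	 * send-queue API added earlier in the series (not shown here).
	 * mlx5dr_rule_destroy() completes asynchronously in the same way.
	 */
	*out_rule = rule;
	return 0;
}
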
* RE: [v6 15/18] net/mlx5/hws: Add HWS rule object 2022-10-20 15:57 ` [v6 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-24 6:54 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:54 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Erez Shitrit > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Erez Shitrit > <erezsh@nvidia.com> > Subject: [v6 15/18] net/mlx5/hws: Add HWS rule object > > HWS rule objects reside under the matcher, each rule holds the configuration > for the packet fields to match on and the set of actions to execute over the > packet that has the requested fields. Rules can be created asynchronously in > parallel over multiple queues to different matchers. Each rule is configured > to the HW. > > Signed-off-by: Erez Shitrit <erezsh@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
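
To illustrate the asynchronous, per-queue insertion described in the commit message above, here is a hedged batching sketch built only on the burst semantics visible in the patch: a burst insertion does not ring the HW doorbell, while a non-burst insertion on the same queue rings it and also flushes the work queued before it. The helper name and the caller-provided arrays are placeholders, and it assumes the same headers and pre-allocated handles as the previous sketch.

static int example_insert_burst(struct mlx5dr_matcher *matcher,
				const struct rte_flow_item *items[],
				struct mlx5dr_rule_action *actions[],
				struct mlx5dr_rule *handles[],
				uint32_t n, uint32_t queue_id)
{
	struct mlx5dr_rule_attr attr = { .queue_id = queue_id };
	uint32_t i;
	int ret;

	for (i = 0; i < n; i++) {
		/* Only the last insertion clears burst, ringing the HW and
		 * flushing the rules queued before it on this queue.
		 */
		attr.burst = (i != n - 1);
		attr.user_data = handles[i];	/* completion cookie, must be non-NULL */
		ret = mlx5dr_rule_create(matcher, 0, items[i], 0,
					 actions[i], &attr, handles[i]);
		if (ret)
			return ret;	/* e.g. rte_errno == EBUSY: poll completions, retry */
	}
	return 0;
}

Since each send queue is independent, different threads can run such batches in parallel as long as each thread uses its own queue_id.
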
* [v6 16/18] net/mlx5/hws: Add HWS action object 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (14 preceding siblings ...) 2022-10-20 15:57 ` [v6 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-20 15:57 ` [v6 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker ` (2 subsequent siblings) 18 siblings, 0 replies; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Erez Shitrit From: Erez Shitrit <erezsh@nvidia.com> Action objects are used for executing different HW actions over packets. Each action contains the HW resources and parameters needed for action use over the HW when creating a rule. Signed-off-by: Erez Shitrit <erezsh@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_action.c | 2237 +++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_action.h | 253 +++ drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++++ drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + 4 files changed, 3084 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c new file mode 100644 index 0000000000..755d5d09cf --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.c @@ -0,0 +1,2237 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +#define WIRE_PORT 0xFFFF + +#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1 + +/* This is the maximum allowed action order for each table type: + * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term + * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY, + * ENCAP, Term + */ +static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = { + [MLX5DR_TABLE_TYPE_NIC_RX] = { + BIT(MLX5DR_ACTION_TYP_TAG), + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_TIR) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_NIC_TX] = { + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, + [MLX5DR_TABLE_TYPE_FDB] = { + BIT(MLX5DR_ACTION_TYP_TNL_L2_TO_L2) | + BIT(MLX5DR_ACTION_TYP_TNL_L3_TO_L2), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), + BIT(MLX5DR_ACTION_TYP_POP_VLAN), 
+ BIT(MLX5DR_ACTION_TYP_CTR), + BIT(MLX5DR_ACTION_TYP_ASO_METER), + BIT(MLX5DR_ACTION_TYP_ASO_CT), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_PUSH_VLAN), + BIT(MLX5DR_ACTION_TYP_MODIFY_HDR), + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L2) | + BIT(MLX5DR_ACTION_TYP_L2_TO_TNL_L3), + BIT(MLX5DR_ACTION_TYP_FT) | + BIT(MLX5DR_ACTION_TYP_MISS) | + BIT(MLX5DR_ACTION_TYP_VPORT) | + BIT(MLX5DR_ACTION_TYP_DROP), + BIT(MLX5DR_ACTION_TYP_LAST), + }, +}; + +static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_shared_stc *shared_stc; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + if (ctx->common_res[tbl_type].shared_stc[stc_type]) { + rte_atomic32_add(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + pthread_spin_unlock(&ctx->ctrl_lock); + return 0; + } + + shared_stc = simple_calloc(1, sizeof(*shared_stc)); + if (!shared_stc) { + DR_LOG(ERR, "Failed to allocate memory for shared STCs"); + rte_errno = ENOMEM; + goto unlock_and_out; + } + switch (stc_type) { + case MLX5DR_CONTEXT_SHARED_STC_DECAP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_header.decap = 0; + stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4; + break; + case MLX5DR_CONTEXT_SHARED_STC_POP: + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "No such type : stc_type\n"); + assert(false); + rte_errno = EINVAL; + goto unlock_and_out; + } + + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &shared_stc->remove_header); + if (ret) { + DR_LOG(ERR, "Failed to allocate shared decap l2 STC"); + goto free_shared_stc; + } + + ctx->common_res[tbl_type].shared_stc[stc_type] = shared_stc; + + rte_atomic32_init(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount); + rte_atomic32_set(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount, 1); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_shared_stc: + simple_free(shared_stc); +unlock_and_out: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void mlx5dr_action_put_shared_stc_nic(struct mlx5dr_context *ctx, + enum mlx5dr_context_shared_stc_type stc_type, + uint8_t tbl_type) +{ + struct mlx5dr_action_shared_stc *shared_stc; + + pthread_spin_lock(&ctx->ctrl_lock); + if (!rte_atomic32_dec_and_test(&ctx->common_res[tbl_type].shared_stc[stc_type]->refcount)) { + pthread_spin_unlock(&ctx->ctrl_lock); + return; + } + + shared_stc = ctx->common_res[tbl_type].shared_stc[stc_type]; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &shared_stc->remove_header); + simple_free(shared_stc); + ctx->common_res[tbl_type].shared_stc[stc_type] = NULL; + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static int mlx5dr_action_get_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + int ret; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + rte_errno = EINVAL; + return rte_errno; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = 
mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for RX shared STCs (type: %d)", + stc_type); + return ret; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for TX shared STCs(type: %d)", + stc_type); + goto clean_nic_rx_stc; + } + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_get_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); + if (ret) { + DR_LOG(ERR, "Failed to allocate memory for FDB shared STCs (type: %d)", + stc_type); + goto clean_nic_tx_stc; + } + } + + return 0; + +clean_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); +clean_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + return ret; +} + +static void mlx5dr_action_put_shared_stc(struct mlx5dr_action *action, + enum mlx5dr_context_shared_stc_type stc_type) +{ + struct mlx5dr_context *ctx = action->ctx; + + if (stc_type >= MLX5DR_CONTEXT_SHARED_STC_MAX) { + assert(false); + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_RX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_NIC_TX); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_put_shared_stc_nic(ctx, stc_type, MLX5DR_TABLE_TYPE_FDB); +} + +static void mlx5dr_action_print_combo(enum mlx5dr_action_type *user_actions) +{ + DR_LOG(ERR, "Invalid action_type sequence"); + while (*user_actions != MLX5DR_ACTION_TYP_LAST) { + DR_LOG(ERR, "%s", mlx5dr_debug_action_type_to_str(*user_actions)); + user_actions++; + } +} + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type) +{ + const uint32_t *order_arr = action_order_arr[table_type]; + uint8_t order_idx = 0; + uint8_t user_idx = 0; + bool valid_combo; + + while (order_arr[order_idx] != BIT(MLX5DR_ACTION_TYP_LAST)) { + /* User action order validated move to next user action */ + if (BIT(user_actions[user_idx]) & order_arr[order_idx]) + user_idx++; + + /* Iterate to the next supported action in the order */ + order_idx++; + } + + /* Combination is valid if all user action were processed */ + valid_combo = user_actions[user_idx] == MLX5DR_ACTION_TYP_LAST; + if (!valid_combo) + mlx5dr_action_print_combo(user_actions); + + return valid_combo; +} + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr) +{ + struct mlx5dr_action *action; + uint32_t i; + + for (i = 0; i < num_actions; i++) { + action = rule_actions[i].action; + + switch (action->type) { + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TIR: + attr[i].type = MLX5DV_FLOW_ACTION_DEST_DEVX; + attr[i].obj = action->devx_obj; + break; + case MLX5DR_ACTION_TYP_TAG: + attr[i].type = MLX5DV_FLOW_ACTION_TAG; + attr[i].tag_value = rule_actions[i].tag.value; + break; +#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEFAULT_MISS + case MLX5DR_ACTION_TYP_MISS: + attr[i].type = MLX5DV_FLOW_ACTION_DEFAULT_MISS; + break; +#endif + case MLX5DR_ACTION_TYP_DROP: + attr[i].type = MLX5DV_FLOW_ACTION_DROP; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + 
case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr[i].type = MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION; + attr[i].action = action->flow_action; + break; +#ifdef HAVE_IBV_FLOW_DEVX_COUNTERS + case MLX5DR_ACTION_TYP_CTR: + attr[i].type = MLX5DV_FLOW_ACTION_COUNTERS_DEVX; + attr[i].obj = action->devx_obj; + + if (rule_actions[i].counter.offset) { + DR_LOG(ERR, "Counter offset not supported over root"); + rte_errno = ENOTSUP; + return rte_errno; + } + break; +#endif + default: + DR_LOG(ERR, "Found unsupported action type: %d", action->type); + rte_errno = ENOTSUP; + return rte_errno; + } + } + + return 0; +} + +static bool mlx5dr_action_fixup_stc_attr(struct mlx5dr_cmd_stc_modify_attr *stc_attr, + struct mlx5dr_cmd_stc_modify_attr *fixup_stc_attr, + enum mlx5dr_table_type table_type, + bool is_mirror) +{ + struct mlx5dr_devx_obj *devx_obj; + bool use_fixup = false; + uint32_t fw_tbl_type; + + fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror); + + switch (stc_attr->action_type) { + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE: + if (!is_mirror) + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + else + devx_obj = + mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_attr->ste_table.ste_pool, + &stc_attr->ste_table.ste); + + *fixup_stc_attr = *stc_attr; + fixup_stc_attr->ste_table.ste_obj_id = devx_obj->id; + use_fixup = true; + break; + + case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT: + if (stc_attr->vport.vport_num != WIRE_PORT) + break; + + if (fw_tbl_type == FS_FT_FDB_RX) { + /* The FW doesn't allow to go back to wire in RX, so change it to DROP */ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + } else if (fw_tbl_type == FS_FT_FDB_TX) { + /*The FW doesn't allow to go to wire in the TX by JUMP_TO_VPORT*/ + fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK; + fixup_stc_attr->action_offset = stc_attr->action_offset; + fixup_stc_attr->stc_offset = stc_attr->stc_offset; + fixup_stc_attr->vport.vport_num = 0; + fixup_stc_attr->vport.esw_owner_vhca_id = stc_attr->vport.esw_owner_vhca_id; + } + use_fixup = true; + break; + + default: + break; + } + + return use_fixup; +} + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_cmd_stc_modify_attr cleanup_stc_attr = {0}; + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr fixup_stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj_0; + bool use_fixup; + int ret; + + ret = mlx5dr_pool_chunk_alloc(stc_pool, stc); + if (ret) { + DR_LOG(ERR, "Failed to allocate single action STC"); + return ret; + } + + stc_attr->stc_offset = stc->offset; + devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + + /* According to table/action limitation change the stc_attr */ + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, table_type, false); + ret = mlx5dr_cmd_stc_modify(devx_obj_0, use_fixup ? 
&fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto free_chunk; + } + + /* Modify the FDB peer */ + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + struct mlx5dr_devx_obj *devx_obj_1; + + devx_obj_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + + use_fixup = mlx5dr_action_fixup_stc_attr(stc_attr, &fixup_stc_attr, + table_type, true); + ret = mlx5dr_cmd_stc_modify(devx_obj_1, use_fixup ? &fixup_stc_attr : stc_attr); + if (ret) { + DR_LOG(ERR, "Failed to modify peer STC action_type %d tbl_type %d", + stc_attr->action_type, table_type); + goto clean_devx_obj_0; + } + } + + return 0; + +clean_devx_obj_0: + cleanup_stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + cleanup_stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + cleanup_stc_attr.stc_offset = stc->offset; + mlx5dr_cmd_stc_modify(devx_obj_0, &cleanup_stc_attr); +free_chunk: + mlx5dr_pool_chunk_free(stc_pool, stc); + return rte_errno; +} + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc) +{ + struct mlx5dr_pool *stc_pool = ctx->stc_pool[table_type]; + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_devx_obj *devx_obj; + + /* Modify the STC not to point to an object */ + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + stc_attr.stc_offset = stc->offset; + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + + if (table_type == MLX5DR_TABLE_TYPE_FDB) { + devx_obj = mlx5dr_pool_chunk_get_base_devx_obj_mirror(stc_pool, stc); + mlx5dr_cmd_stc_modify(devx_obj, &stc_attr); + } + + mlx5dr_pool_chunk_free(stc_pool, stc); +} + +static uint32_t mlx5dr_action_get_mh_stc_type(__be64 pattern) +{ + uint8_t action_type = MLX5_GET(set_action_in, &pattern, action_type); + + switch (action_type) { + case MLX5_MODIFICATION_TYPE_SET: + return MLX5_IFC_STC_ACTION_TYPE_SET; + case MLX5_MODIFICATION_TYPE_ADD: + return MLX5_IFC_STC_ACTION_TYPE_ADD; + case MLX5_MODIFICATION_TYPE_COPY: + return MLX5_IFC_STC_ACTION_TYPE_COPY; + default: + assert(false); + DR_LOG(ERR, "Unsupported action type: 0x%x\n", action_type); + rte_errno = ENOTSUP; + return MLX5_IFC_STC_ACTION_TYPE_NOP; + } +} + +static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action, + struct mlx5dr_devx_obj *obj, + struct mlx5dr_cmd_stc_modify_attr *attr) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TAG: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + break; + case MLX5DR_ACTION_TYP_DROP: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + break; + case MLX5DR_ACTION_TYP_MISS: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + /* TODO Need to support default miss for FDB */ + break; + case MLX5DR_ACTION_TYP_CTR: + attr->id = obj->id; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_COUNTER; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW0; + break; + case MLX5DR_ACTION_TYP_TIR: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_tir_num = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + if (action->modify_header.num_of_actions == 1) { + 
attr->modify_action.data = action->modify_header.single_action; + attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data); + + if (attr->action_type == MLX5_IFC_STC_ACTION_TYPE_ADD || + attr->action_type == MLX5_IFC_STC_ACTION_TYPE_SET) + MLX5_SET(set_action_in, &attr->modify_action.data, data, 0); + } else { + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST; + attr->modify_header.arg_id = action->modify_header.arg_obj->id; + attr->modify_header.pattern_id = action->modify_header.pattern_obj->id; + } + break; + case MLX5DR_ACTION_TYP_FT: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT; + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->dest_table_id = obj->id; + break; + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_header.decap = 1; + attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.arg_id = action->reformat.arg_obj->id; + attr->insert_header.header_size = action->reformat.header_size; + break; + case MLX5DR_ACTION_TYP_ASO_METER: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_POLICER; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO; + attr->aso.aso_type = ASO_OPC_MOD_CONNECTION_TRACKING; + attr->aso.devx_obj_id = obj->id; + attr->aso.return_reg_id = action->aso.return_reg_id; + break; + case MLX5DR_ACTION_TYP_VPORT: + attr->action_offset = MLX5DR_ACTION_OFFSET_HIT; + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT; + attr->vport.vport_num = action->vport.vport_num; + attr->vport.esw_owner_vhca_id = action->vport.esw_owner_vhca_id; + break; + case MLX5DR_ACTION_TYP_POP_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW5; + attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START; + attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2; + break; + case MLX5DR_ACTION_TYP_PUSH_VLAN: + attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT; + attr->action_offset = MLX5DR_ACTION_OFFSET_DW6; + attr->insert_header.encap = 0; + attr->insert_header.is_inline = 1; + attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START; + attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS; + attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN; + break; + default: + DR_LOG(ERR, "Invalid action type %d", action->type); + assert(false); + } +} + +static int +mlx5dr_action_create_stcs(struct 
mlx5dr_action *action, + struct mlx5dr_devx_obj *obj) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_context *ctx = action->ctx; + int ret; + + mlx5dr_action_fill_stc_attr(action, obj, &stc_attr); + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + /* Allocate STC for RX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + if (ret) + goto out_err; + } + + /* Allocate STC for TX */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + if (ret) + goto free_nic_rx_stc; + } + + /* Allocate STC for FDB */ + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) { + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, + MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + if (ret) + goto free_nic_tx_stc; + } + + pthread_spin_unlock(&ctx->ctrl_lock); + + return 0; + +free_nic_tx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); +free_nic_rx_stc: + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, + MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); +out_err: + pthread_spin_unlock(&ctx->ctrl_lock); + return rte_errno; +} + +static void +mlx5dr_action_destroy_stcs(struct mlx5dr_action *action) +{ + struct mlx5dr_context *ctx = action->ctx; + + /* Block unsupported parallel devx obj modify over the same base */ + pthread_spin_lock(&ctx->ctrl_lock); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_RX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_RX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_RX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_TX) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_NIC_TX, + &action->stc[MLX5DR_TABLE_TYPE_NIC_TX]); + + if (action->flags & MLX5DR_ACTION_FLAG_HWS_FDB) + mlx5dr_action_free_single_stc(ctx, MLX5DR_TABLE_TYPE_FDB, + &action->stc[MLX5DR_TABLE_TYPE_FDB]); + + pthread_spin_unlock(&ctx->ctrl_lock); +} + +static bool +mlx5dr_action_is_root_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_ROOT_RX | + MLX5DR_ACTION_FLAG_ROOT_TX | + MLX5DR_ACTION_FLAG_ROOT_FDB); +} + +static bool +mlx5dr_action_is_hws_flags(uint32_t flags) +{ + return flags & (MLX5DR_ACTION_FLAG_HWS_RX | + MLX5DR_ACTION_FLAG_HWS_TX | + MLX5DR_ACTION_FLAG_HWS_FDB); +} + +static struct mlx5dr_action * +mlx5dr_action_create_generic(struct mlx5dr_context *ctx, + uint32_t flags, + enum mlx5dr_action_type action_type) +{ + struct mlx5dr_action *action; + + if (!mlx5dr_action_is_root_flags(flags) && + !mlx5dr_action_is_hws_flags(flags)) { + DR_LOG(ERR, "Action flags must specify root or non root (HWS)"); + rte_errno = ENOTSUP; + return NULL; + } + + action = simple_calloc(1, sizeof(*action)); + if (!action) { + DR_LOG(ERR, "Failed to allocate memory for action [%d]", action_type); + rte_errno = ENOMEM; + return NULL; + } + + action->ctx = ctx; + action->flags = flags; + action->type = action_type; + + return action; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, + struct mlx5dr_table *tbl, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_table_is_root(tbl)) { + DR_LOG(ERR, "Root table cannot be set as 
destination"); + rte_errno = ENOTSUP; + return NULL; + } + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_FT); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = tbl->ft->obj; + } else { + ret = mlx5dr_action_create_stcs(action, tbl->ft); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TIR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_DROP); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MISS); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_tag(struct mlx5dr_context *ctx, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_TAG); + if (!action) + return NULL; + + if (mlx5dr_action_is_hws_flags(flags)) { + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static struct mlx5dr_action * +mlx5dr_action_create_aso(struct mlx5dr_context *ctx, + enum mlx5dr_action_type action_type, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "ASO action cannot be used over root table"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + if (!action) + return NULL; + + action->aso.devx_obj = devx_obj; + action->aso.return_reg_id = return_reg_id; + + ret = mlx5dr_action_create_stcs(action, devx_obj); + if (ret) + goto free_action; + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context 
*ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_METER, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags) +{ + return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_CT, + devx_obj, return_reg_id, flags); +} + +struct mlx5dr_action * +mlx5dr_action_create_counter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *obj, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_hws_flags(flags) && + mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Same action cannot be used for root and non root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_CTR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + action->devx_obj = obj->obj; + } else { + ret = mlx5dr_action_create_stcs(action, obj); + if (ret) + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int mlx5dr_action_create_dest_vport_hws(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint32_t ib_port_num) +{ + struct mlx5dr_cmd_query_vport_caps vport_caps = {0}; + int ret; + + ret = mlx5dr_cmd_query_ib_port(ctx->ibv_ctx, &vport_caps, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed querying port %d\n", ib_port_num); + return ret; + } + action->vport.vport_num = vport_caps.vport_num; + action->vport.esw_owner_vhca_id = vport_caps.esw_owner_vhca_id; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for port %d\n", ib_port_num); + return ret; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (!(flags & MLX5DR_ACTION_FLAG_HWS_FDB)) { + DR_LOG(ERR, "Vport action is supported for FDB only\n"); + rte_errno = EINVAL; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_VPORT); + if (!action) + return NULL; + + ret = mlx5dr_action_create_dest_vport_hws(ctx, action, ib_port_num); + if (ret) { + DR_LOG(ERR, "Failed to create vport action HWS\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Push vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_PUSH_VLAN); + if (!action) + return NULL; + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for push vlan\n"); + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + if (mlx5dr_action_is_root_flags(flags)) { + DR_LOG(ERR, "Pop vlan action not supported for root"); + rte_errno = ENOTSUP; + return NULL; + } + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_POP_VLAN); + if (!action) + return NULL; + + ret = 
mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_action; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed creating stc for pop vlan\n"); + goto free_shared; + } + + return action; + +free_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_conv_reformat_type_to_action(uint32_t reformat_type, + enum mlx5dr_action_type *action_type) +{ + switch (reformat_type) { + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L2_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L2_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L2: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_TNL_L3_TO_L2: + *action_type = MLX5DR_ACTION_TYP_TNL_L3_TO_L2; + break; + case MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3: + *action_type = MLX5DR_ACTION_TYP_L2_TO_TNL_L3; + break; + default: + DR_LOG(ERR, "Invalid reformat type requested"); + rte_errno = ENOTSUP; + return rte_errno; + } + return 0; +} + +static void +mlx5dr_action_conv_reformat_to_verbs(uint32_t action_type, + uint32_t *verb_reformat_type) +{ + switch (action_type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L2_TUNNEL; + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L3_TUNNEL_TO_L2; + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + *verb_reformat_type = + MLX5DV_FLOW_ACTION_PACKET_REFORMAT_TYPE_L2_TO_L3_TUNNEL; + break; + } +} + +static int +mlx5dr_action_conv_flags_to_ft_type(uint32_t flags, enum mlx5dv_flow_table_type *ft_type) +{ + if (flags & MLX5DR_ACTION_FLAG_ROOT_RX) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_RX; + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_TX) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_NIC_TX; +#ifdef HAVE_MLX5DV_FLOW_MATCHER_FT_TYPE + } else if (flags & MLX5DR_ACTION_FLAG_ROOT_FDB) { + *ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB; +#endif + } else { + rte_errno = ENOTSUP; + return 1; + } + + return 0; +} + +static int +mlx5dr_action_create_reformat_root(struct mlx5dr_action *action, + size_t data_sz, + void *data) +{ + enum mlx5dv_flow_table_type ft_type = 0; /*fix compilation warn*/ + uint32_t verb_reformat_type = 0; + int ret; + + /* Convert action to FT type and verbs reformat type */ + ret = mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + if (ret) + return rte_errno; + + mlx5dr_action_conv_reformat_to_verbs(action->type, &verb_reformat_type); + + /* Create the reformat type for root table */ + action->flow_action = + mlx5_glue->dv_create_flow_action_packet_reformat_root(action->ctx->ibv_ctx, + data_sz, + data, + verb_reformat_type, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_action_handle_reformat_args(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint32_t args_log_size; + int ret; + + if (data_sz % 2 != 0) { + DR_LOG(ERR, "Data size should be multiply of 2"); + rte_errno = EINVAL; + return rte_errno; + } + action->reformat.header_size = data_sz; + + args_log_size = mlx5dr_arg_data_size_to_arg_log_size(data_sz); + if 
(args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Data size is bigger than supported"); + rte_errno = EINVAL; + return rte_errno; + } + args_log_size += bulk_size; + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW requests", + args_log_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->reformat.arg_obj = mlx5dr_cmd_arg_create(ctx->ibv_ctx, + args_log_size, + ctx->pd_num); + if (!action->reformat.arg_obj) { + DR_LOG(ERR, "Failed to create arg for reformat"); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->reformat.arg_obj->id, + data, + data_sz); + if (ret) { + DR_LOG(ERR, "Failed to write inline arg for reformat"); + goto free_arg; + } + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create stc for reformat"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static int mlx5dr_action_get_shared_stc_offset(struct mlx5dr_context_common_res *common_res, + enum mlx5dr_context_shared_stc_type stc_type) +{ + return common_res->shared_stc[stc_type]->remove_header.offset; +} + +static int mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + ret = mlx5dr_action_handle_reformat_args(ctx, data_sz, data, bulk_size, + action); + if (ret) { + DR_LOG(ERR, "Failed to create args for reformat"); + return ret; + } + + /* The action is remove-l2-header + insert-l3-header */ + ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + if (ret) { + DR_LOG(ERR, "Failed to create remove stc for reformat"); + goto free_arg; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) { + DR_LOG(ERR, "Failed to create insert stc for reformat"); + goto down_shared; + } + + return 0; + +down_shared: + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); +free_arg: + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + return ret; +} + +static void mlx5dr_action_prepare_decap_l3_actions(size_t data_sz, + uint8_t *mh_data, + int *num_of_actions) +{ + int actions; + uint32_t i; + + /* Remove L2L3 outer headers */ + MLX5_SET(stc_ste_param_remove, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE); + MLX5_SET(stc_ste_param_remove, mh_data, decap, 0x1); + MLX5_SET(stc_ste_param_remove, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_remove, mh_data, remove_end_anchor, + MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; /* Assume every action is 2 dw */ + actions = 1; + + /* Add the new header using inline action 4Byte at a time, the header + * is added in reversed order to the beginning of the packet to avoid + * incorrect parsing by the HW. Since header is 14B or 18B an extra + * two bytes are padded and later removed. 
+ */ + for (i = 0; i < data_sz / MLX5DR_ACTION_INLINE_DATA_SIZE + 1; i++) { + MLX5_SET(stc_ste_param_insert, mh_data, action_type, + MLX5_MODIFICATION_TYPE_INSERT); + MLX5_SET(stc_ste_param_insert, mh_data, inline_data, 0x1); + MLX5_SET(stc_ste_param_insert, mh_data, insert_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + MLX5_SET(stc_ste_param_insert, mh_data, insert_size, 2); + mh_data += MLX5DR_ACTION_DOUBLE_SIZE; + actions++; + } + + /* Remove first 2 extra bytes */ + MLX5_SET(stc_ste_param_remove_words, mh_data, action_type, + MLX5_MODIFICATION_TYPE_REMOVE_WORDS); + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_start_anchor, + MLX5_HEADER_ANCHOR_PACKET_START); + /* The hardware expects here size in words (2 bytes) */ + MLX5_SET(stc_ste_param_remove_words, mh_data, remove_size, 1); + actions++; + + *num_of_actions = actions; +} + +static int +mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + uint8_t mh_data[MLX5DR_ACTION_REFORMAT_DATA_SIZE] = {0}; + int num_of_actions; + int mh_data_size; + int ret; + + if (data_sz != MLX5DR_ACTION_HDR_LEN_L2 && + data_sz != MLX5DR_ACTION_HDR_LEN_L2_W_VLAN) { + DR_LOG(ERR, "Data size is not supported for decap-l3\n"); + rte_errno = EINVAL; + return rte_errno; + } + + mlx5dr_action_prepare_decap_l3_actions(data_sz, mh_data, &num_of_actions); + + mh_data_size = num_of_actions * MLX5DR_MODIFY_ACTION_SIZE; + + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, mh_data_size, + (__be64 *)mh_data, bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header for decap-l3\n"); + return ret; + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) { + mlx5dr_action_prepare_decap_l3_data(data, mh_data, num_of_actions); + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)mh_data, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg decap_l3"); + goto clean_stc; + } + } + + return 0; + +clean_stc: + mlx5dr_action_destroy_stcs(action); +free_mh_obj: + mlx5dr_pat_arg_destroy_modify_header(ctx, action); + return ret; +} + +static int +mlx5dr_action_create_reformat_hws(struct mlx5dr_context *ctx, + size_t data_sz, + void *data, + uint32_t bulk_size, + struct mlx5dr_action *action) +{ + int ret; + + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + ret = mlx5dr_action_create_stcs(action, NULL); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + ret = mlx5dr_action_handle_l2_to_tunnel_l2(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + ret = mlx5dr_action_handle_l2_to_tunnel_l3(ctx, data_sz, data, bulk_size, action); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + ret = mlx5dr_action_handle_tunnel_l3_to_l2(ctx, data_sz, data, bulk_size, action); + break; + + default: + assert(false); + rte_errno = ENOTSUP; + return rte_errno; + } + + return ret; +} + +struct mlx5dr_action * +mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, + enum mlx5dr_action_reformat_type reformat_type, + size_t data_sz, + void *inline_data, + uint32_t log_bulk_size, + uint32_t flags) +{ + enum mlx5dr_action_type action_type; + struct mlx5dr_action *action; + int ret; + + ret = mlx5dr_action_conv_reformat_type_to_action(reformat_type, &action_type); + if (ret) + return NULL; + + action = mlx5dr_action_create_generic(ctx, flags, action_type); + 
if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk reformat not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_root(action, data_sz, inline_data); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)\n", + flags); + rte_errno = EINVAL; + goto free_action; + } + + ret = mlx5dr_action_create_reformat_hws(ctx, data_sz, inline_data, log_bulk_size, action); + if (ret) { + DR_LOG(ERR, "Failed to create reformat.\n"); + rte_errno = EINVAL; + goto free_action; + } + + return action; + +free_action: + simple_free(action); + return NULL; +} + +static int +mlx5dr_action_create_modify_header_root(struct mlx5dr_action *action, + size_t actions_sz, + __be64 *actions) +{ + enum mlx5dv_flow_table_type ft_type = 0; + int ret; + + ret = mlx5dr_action_conv_flags_to_ft_type(action->flags, &ft_type); + if (ret) + return rte_errno; + + action->flow_action = + mlx5_glue->dv_create_flow_action_modify_header_root(action->ctx->ibv_ctx, + actions_sz, + (uint64_t *)actions, + ft_type); + if (!action->flow_action) { + rte_errno = errno; + return rte_errno; + } + + return 0; +} + +struct mlx5dr_action * +mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, + size_t pattern_sz, + __be64 pattern[], + uint32_t log_bulk_size, + uint32_t flags) +{ + struct mlx5dr_action *action; + int ret; + + action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_MODIFY_HDR); + if (!action) + return NULL; + + if (mlx5dr_action_is_root_flags(flags)) { + if (log_bulk_size) { + DR_LOG(ERR, "Bulk modify-header not supported over root"); + rte_errno = ENOTSUP; + goto free_action; + } + ret = mlx5dr_action_create_modify_header_root(action, pattern_sz, pattern); + if (ret) + goto free_action; + + return action; + } + + if (!mlx5dr_action_is_hws_flags(flags) || + ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) { + DR_LOG(ERR, "Flags don't fit hws (flags: %x0x, log_bulk_size: %d)\n", + flags, log_bulk_size); + rte_errno = EINVAL; + goto free_action; + } + + if (pattern_sz / MLX5DR_MODIFY_ACTION_SIZE == 1) { + /* Optimize single modiy action to be used inline */ + action->modify_header.single_action = pattern[0]; + action->modify_header.num_of_actions = 1; + action->modify_header.single_action_type = + MLX5_GET(set_action_in, pattern, action_type); + } else { + /* Use multi action pattern and argument */ + ret = mlx5dr_pat_arg_create_modify_header(ctx, action, pattern_sz, + pattern, log_bulk_size); + if (ret) { + DR_LOG(ERR, "Failed allocating modify-header\n"); + goto free_action; + } + } + + ret = mlx5dr_action_create_stcs(action, NULL); + if (ret) + goto free_mh_obj; + + return action; + +free_mh_obj: + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(ctx, action); +free_action: + simple_free(action); + return NULL; +} + +static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_MISS: + case MLX5DR_ACTION_TYP_TAG: + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_CTR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + case MLX5DR_ACTION_TYP_PUSH_VLAN: + mlx5dr_action_destroy_stcs(action); + break; + case 
MLX5DR_ACTION_TYP_POP_VLAN: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP); + break; + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + mlx5dr_action_destroy_stcs(action); + if (action->modify_header.num_of_actions > 1) + mlx5dr_pat_arg_destroy_modify_header(action->ctx, action); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + mlx5dr_action_destroy_stcs(action); + mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + mlx5dr_action_destroy_stcs(action); + mlx5dr_cmd_destroy_obj(action->reformat.arg_obj); + break; + } +} + +static void mlx5dr_action_destroy_root(struct mlx5dr_action *action) +{ + switch (action->type) { + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + case MLX5DR_ACTION_TYP_MODIFY_HDR: + ibv_destroy_flow_action(action->flow_action); + break; + } +} + +int mlx5dr_action_destroy(struct mlx5dr_action *action) +{ + if (mlx5dr_action_is_root_flags(action->flags)) + mlx5dr_action_destroy_root(action); + else + mlx5dr_action_destroy_hws(action); + + simple_free(action); + return 0; +} + +/* Called under pthread_spin_lock(&ctx->ctrl_lock) */ +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_cmd_stc_modify_attr stc_attr = {0}; + struct mlx5dr_action_default_stc *default_stc; + int ret; + + if (ctx->common_res[tbl_type].default_stc) { + ctx->common_res[tbl_type].default_stc->refcount++; + return 0; + } + + default_stc = simple_calloc(1, sizeof(*default_stc)); + if (!default_stc) { + DR_LOG(ERR, "Failed to allocate memory for default STCs"); + rte_errno = ENOMEM; + return rte_errno; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_ctr); + if (ret) { + DR_LOG(ERR, "Failed to allocate default counter STC"); + goto free_default_stc; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw5); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW5 STC"); + goto free_nop_ctr; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW6; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw6); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW6 STC"); + goto free_nop_dw5; + } + + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW7; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->nop_dw7); + if (ret) { + DR_LOG(ERR, "Failed to allocate default NOP DW7 STC"); + goto free_nop_dw6; + } + + stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_ALLOW; + stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT; + ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type, + &default_stc->default_hit); + if (ret) { + DR_LOG(ERR, "Failed to allocate default allow STC"); + goto free_nop_dw7; + } + + ctx->common_res[tbl_type].default_stc = default_stc; + ctx->common_res[tbl_type].default_stc->refcount++; + + return 0; + +free_nop_dw7: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); +free_nop_dw6: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); +free_nop_dw5: + 
mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); +free_nop_ctr: + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); +free_default_stc: + simple_free(default_stc); + return rte_errno; +} + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type) +{ + struct mlx5dr_action_default_stc *default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + + default_stc = ctx->common_res[tbl_type].default_stc; + if (--default_stc->refcount) + return; + + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->default_hit); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw7); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw6); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_dw5); + mlx5dr_action_free_single_stc(ctx, tbl_type, &default_stc->nop_ctr); + simple_free(default_stc); + ctx->common_res[tbl_type].default_stc = NULL; +} + +static void mlx5dr_action_modify_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + mlx5dr_arg_write(queue, NULL, arg_idx, arg_data, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); +} + +void +mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions) +{ + uint8_t *e_src; + int i; + + /* num_of_actions = remove l3l2 + 4/5 inserts + remove extra 2 bytes + * copy from end of src to the start of dst. + * move to the end, 2 is the leftover from 14B or 18B + */ + if (num_of_actions == DECAP_L3_NUM_ACTIONS_W_NO_VLAN) + e_src = src + MLX5DR_ACTION_HDR_LEN_L2; + else + e_src = src + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN; + + /* Move dst over the first remove action + zero data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + /* Move dst over the first insert ctrl action */ + dst += MLX5DR_ACTION_DOUBLE_SIZE / 2; + /* Actions: + * no vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * with vlan: r_h-insert_4b-insert_4b-insert_4b-insert_4b-insert_4b-remove_2b. + * the loop is without the last insertion. 
+ */ + for (i = 0; i < num_of_actions - 3; i++) { + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE; + memcpy(dst, e_src, MLX5DR_ACTION_INLINE_DATA_SIZE); /* data */ + dst += MLX5DR_ACTION_DOUBLE_SIZE; + } + /* Copy the last 2 bytes after a gap of 2 bytes which will be removed */ + e_src -= MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + dst += MLX5DR_ACTION_INLINE_DATA_SIZE / 2; + memcpy(dst, e_src, 2); +} + +static struct mlx5dr_actions_wqe_setter * +mlx5dr_action_setter_find_first(struct mlx5dr_actions_wqe_setter *setter, + uint8_t req_flags) +{ + /* Use a new setter if requested flags are taken */ + while (setter->flags & req_flags) + setter++; + + /* Use current setter in required flags are not used */ + return setter; +} + +static void +mlx5dr_action_apply_stc(struct mlx5dr_actions_apply_data *apply, + enum mlx5dr_action_stc_idx stc_idx, + uint8_t action_idx) +{ + struct mlx5dr_action *action = apply->rule_action[action_idx].action; + + apply->wqe_ctrl->stc_ix[stc_idx] = + htobe32(action->stc[apply->tbl_type].offset); +} + +static void +mlx5dr_action_setter_push_vlan(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_double]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = rule_action->push_vlan.vlan_hdr; + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_modify_header(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + uint8_t *single_action; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + + if (action->modify_header.num_of_actions == 1) { + if (action->modify_header.single_action_type == + MLX5_MODIFICATION_TYPE_COPY) { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + return; + } + + if (action->flags & MLX5DR_ACTION_FLAG_SHARED) + single_action = (uint8_t *)&action->modify_header.single_action; + else + single_action = rule_action->modify_header.data; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = + *(__be32 *)MLX5_ADDR_OF(set_action_in, single_action, data); + } else { + /* Argument offset multiple with number of args per these actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->modify_header.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_action_modify_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->modify_header.data, + action->modify_header.num_of_actions); + } + } +} + +static void +mlx5dr_action_setter_insert_ptr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t arg_idx, arg_sz; + + rule_action = &apply->rule_action[setter->idx_double]; + + /* Argument offset multiple on args required for header size */ + arg_sz = mlx5dr_arg_data_size_to_arg_size(rule_action->action->reformat.header_size); + arg_idx = 
rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(rule_action->action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_write(apply->queue, NULL, + rule_action->action->reformat.arg_obj->id + arg_idx, + rule_action->reformat.data, + rule_action->action->reformat.header_size); + } +} + +static void +mlx5dr_action_setter_tnl_l3_to_l2(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + struct mlx5dr_action *action; + uint32_t arg_sz, arg_idx; + + rule_action = &apply->rule_action[setter->idx_double]; + action = rule_action->action; + + /* Argument offset multiple on args required for num of actions */ + arg_sz = mlx5dr_arg_get_arg_size(action->modify_header.num_of_actions); + arg_idx = rule_action->reformat.offset * arg_sz; + + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(arg_idx); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + + if (!(action->flags & MLX5DR_ACTION_FLAG_SHARED)) { + apply->require_dep = 1; + mlx5dr_arg_decapl3_write(apply->queue, + action->modify_header.arg_obj->id + arg_idx, + rule_action->reformat.data, + action->modify_header.num_of_actions); + } +} + +static void +mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + uint32_t exe_aso_ctrl; + uint32_t offset; + + rule_action = &apply->rule_action[setter->idx_double]; + + switch (rule_action->action->type) { + case MLX5DR_ACTION_TYP_ASO_METER: + /* exe_aso_ctrl format: + * [STC only and reserved bits 29b][init_color 2b][meter_id 1b] + */ + offset = rule_action->aso_meter.offset / MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_meter.offset % MLX5_ASO_METER_NUM_PER_OBJ; + exe_aso_ctrl |= rule_action->aso_meter.init_color << + MLX5DR_ACTION_METER_INIT_COLOR_OFFSET; + break; + case MLX5DR_ACTION_TYP_ASO_CT: + /* exe_aso_ctrl CT format: + * [STC only and reserved bits 31b][direction 1b] + */ + offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ; + exe_aso_ctrl = rule_action->aso_ct.direction; + break; + default: + DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type); + rte_errno = ENOTSUP; + return; + } + + /* aso_object_offset format: [24B] */ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = htobe32(offset); + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = htobe32(exe_aso_ctrl); + + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW6, setter->idx_double); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; +} + +static void +mlx5dr_action_setter_tag(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + rule_action = &apply->rule_action[setter->idx_single]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->tag.value); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_ctrl_ctr(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + struct mlx5dr_rule_action *rule_action; + + 
rule_action = &apply->rule_action[setter->idx_ctr]; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = htobe32(rule_action->counter.offset); + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_CTRL, setter->idx_ctr); +} + +static void +mlx5dr_action_setter_single(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single); +} + +static void +mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_POP)); +} + +static void +mlx5dr_action_setter_hit(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_HIT, setter->idx_hit); +} + +static void +mlx5dr_action_setter_default_hit(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = + htobe32(apply->common_res->default_stc->default_hit.offset); +} + +static void +mlx5dr_action_setter_hit_next_action(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_HIT_LSB] = htobe32(apply->next_direct_idx << 6); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_HIT] = htobe32(apply->jump_to_action_stc); +} + +static void +mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res, + MLX5DR_CONTEXT_SHARED_STC_DECAP)); +} + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at) +{ + struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1; + enum mlx5dr_action_type *action_type = at->action_type_arr; + struct mlx5dr_actions_wqe_setter *setter = at->setters; + struct mlx5dr_actions_wqe_setter *pop_setter = NULL; + struct mlx5dr_actions_wqe_setter *last_setter; + int i; + + /* Note: Given action combination must be valid */ + + /* Check if actions were already processed */ + if (at->num_of_action_stes) + return 0; + + for (i = 0; i < MLX5DR_ACTION_MAX_STE; i++) + setter[i].set_hit = &mlx5dr_action_setter_hit_next_action; + + /* The same action template setters can be used with jumbo or match + * STE, to support both cases we reserve the first setter for cases + * with jumbo STE to allow jump to the first action STE. + * This extra setter can be reduced in some cases on rule creation.
+ */ + setter = start_setter; + last_setter = start_setter; + + for (i = 0; i < at->num_actions; i++) { + switch (action_type[i]) { + case MLX5DR_ACTION_TYP_DROP: + case MLX5DR_ACTION_TYP_TIR: + case MLX5DR_ACTION_TYP_FT: + case MLX5DR_ACTION_TYP_VPORT: + case MLX5DR_ACTION_TYP_MISS: + /* Hit action */ + last_setter->flags |= ASF_HIT; + last_setter->set_hit = &mlx5dr_action_setter_hit; + last_setter->idx_hit = i; + break; + + case MLX5DR_ACTION_TYP_POP_VLAN: + /* Single remove header to header */ + if (pop_setter) { + /* We have 2 pops, use the shared */ + pop_setter->set_single = &mlx5dr_action_setter_single_double_pop; + break; + } + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + pop_setter = setter; + break; + + case MLX5DR_ACTION_TYP_PUSH_VLAN: + /* Double insert inline */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY; + setter->set_double = &mlx5dr_action_setter_push_vlan; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_MODIFY_HDR: + /* Double modify header list */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_modify_header; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_ASO_METER: + case MLX5DR_ACTION_TYP_ASO_CT: + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE; + setter->set_double = &mlx5dr_action_setter_aso; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L2_TO_L2: + /* Single remove header to header */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY); + setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE; + setter->set_single = &mlx5dr_action_setter_single; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L2: + /* Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE); + setter->flags |= ASF_DOUBLE | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_L2_TO_TNL_L3: + /* Single remove + Double insert header with pointer */ + setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE); + setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE; + setter->set_double = &mlx5dr_action_setter_insert_ptr; + setter->idx_double = i; + setter->set_single = &mlx5dr_action_setter_common_decap; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_TNL_L3_TO_L2: + /* Double modify header list with remove and push inline */ + setter = mlx5dr_action_setter_find_first(last_setter, + ASF_DOUBLE | ASF_REMOVE); + setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE; + setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2; + setter->idx_double = i; + break; + + case MLX5DR_ACTION_TYP_TAG: + /* Single TAG action, search for any room from the start */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_SINGLE1); + setter->flags |= ASF_SINGLE1; + setter->set_single = &mlx5dr_action_setter_tag; + setter->idx_single = i; + break; + + case MLX5DR_ACTION_TYP_CTR: + /* Control counter action + * TODO: Current counter executed first. 
Support is needed + * for single action counter action which is done last. + * Example: Decap + CTR + */ + setter = mlx5dr_action_setter_find_first(start_setter, ASF_CTR); + setter->flags |= ASF_CTR; + setter->set_ctr = &mlx5dr_action_setter_ctrl_ctr; + setter->idx_ctr = i; + break; + + default: + DR_LOG(ERR, "Unsupported action type: %d", action_type[i]); + rte_errno = ENOTSUP; + assert(false); + return rte_errno; + } + + last_setter = RTE_MAX(setter, last_setter); + } + + /* Set default hit on the last STE if no hit action provided */ + if (!(last_setter->flags & ASF_HIT)) + last_setter->set_hit = &mlx5dr_action_setter_default_hit; + + at->num_of_action_stes = last_setter - start_setter + 1; + + /* Check if action template doesn't require any action DWs */ + at->only_term = (at->num_of_action_stes == 1) && + !(last_setter->flags & ~(ASF_CTR | ASF_HIT)); + + return 0; +} + +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]) +{ + struct mlx5dr_action_template *at; + uint8_t num_actions = 0; + int i; + + at = simple_calloc(1, sizeof(*at)); + if (!at) { + DR_LOG(ERR, "Failed to allocate action template"); + rte_errno = ENOMEM; + return NULL; + } + + while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST) + ; + + at->num_actions = num_actions - 1; + at->action_type_arr = simple_calloc(num_actions, sizeof(*action_type)); + if (!at->action_type_arr) { + DR_LOG(ERR, "Failed to allocate action type array"); + rte_errno = ENOMEM; + goto free_at; + } + + for (i = 0; i < num_actions; i++) + at->action_type_arr[i] = action_type[i]; + + return at; + +free_at: + simple_free(at); + return NULL; +} + +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at) +{ + simple_free(at->action_type_arr); + simple_free(at); + return 0; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h new file mode 100644 index 0000000000..f14d91f994 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_action.h @@ -0,0 +1,253 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_ACTION_H_ +#define MLX5DR_ACTION_H_ + +/* Max number of STEs needed for a rule (including match) */ +#define MLX5DR_ACTION_MAX_STE 7 + +enum mlx5dr_action_stc_idx { + MLX5DR_ACTION_STC_IDX_CTRL = 0, + MLX5DR_ACTION_STC_IDX_HIT = 1, + MLX5DR_ACTION_STC_IDX_DW5 = 2, + MLX5DR_ACTION_STC_IDX_DW6 = 3, + MLX5DR_ACTION_STC_IDX_DW7 = 4, + MLX5DR_ACTION_STC_IDX_MAX = 5, + /* STC Jumbo STE combo: CTR, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE = 1, + /* STC combo1: CTR, SINGLE, DOUBLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3, + /* STC combo2: CTR, 3 x SINGLE, Hit */ + MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4, +}; + +enum mlx5dr_action_offset { + MLX5DR_ACTION_OFFSET_DW0 = 0, + MLX5DR_ACTION_OFFSET_DW5 = 5, + MLX5DR_ACTION_OFFSET_DW6 = 6, + MLX5DR_ACTION_OFFSET_DW7 = 7, + MLX5DR_ACTION_OFFSET_HIT = 3, + MLX5DR_ACTION_OFFSET_HIT_LSB = 4, +}; + +enum { + MLX5DR_ACTION_DOUBLE_SIZE = 8, + MLX5DR_ACTION_INLINE_DATA_SIZE = 4, + MLX5DR_ACTION_HDR_LEN_L2_MACS = 12, + MLX5DR_ACTION_HDR_LEN_L2_VLAN = 4, + MLX5DR_ACTION_HDR_LEN_L2_ETHER = 2, + MLX5DR_ACTION_HDR_LEN_L2 = (MLX5DR_ACTION_HDR_LEN_L2_MACS + + MLX5DR_ACTION_HDR_LEN_L2_ETHER), + MLX5DR_ACTION_HDR_LEN_L2_W_VLAN = (MLX5DR_ACTION_HDR_LEN_L2 + + MLX5DR_ACTION_HDR_LEN_L2_VLAN), + MLX5DR_ACTION_REFORMAT_DATA_SIZE = 64, + DECAP_L3_NUM_ACTIONS_W_NO_VLAN = 6, + DECAP_L3_NUM_ACTIONS_W_VLAN = 7, +}; + +enum mlx5dr_action_setter_flag { +
ASF_SINGLE1 = 1 << 0, + ASF_SINGLE2 = 1 << 1, + ASF_SINGLE3 = 1 << 2, + ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3, + ASF_REPARSE = 1 << 3, + ASF_REMOVE = 1 << 4, + ASF_MODIFY = 1 << 5, + ASF_CTR = 1 << 6, + ASF_HIT = 1 << 7, +}; + +struct mlx5dr_action_default_stc { + struct mlx5dr_pool_chunk nop_ctr; + struct mlx5dr_pool_chunk nop_dw5; + struct mlx5dr_pool_chunk nop_dw6; + struct mlx5dr_pool_chunk nop_dw7; + struct mlx5dr_pool_chunk default_hit; + uint32_t refcount; +}; + +struct mlx5dr_action_shared_stc { + struct mlx5dr_pool_chunk remove_header; + rte_atomic32_t refcount; +}; + +struct mlx5dr_actions_apply_data { + struct mlx5dr_send_engine *queue; + struct mlx5dr_rule_action *rule_action; + uint32_t *wqe_data; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + uint32_t jump_to_action_stc; + struct mlx5dr_context_common_res *common_res; + enum mlx5dr_table_type tbl_type; + uint32_t next_direct_idx; + uint8_t require_dep; +}; + +struct mlx5dr_actions_wqe_setter; + +typedef void (*mlx5dr_action_setter_fp) + (struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter); + +struct mlx5dr_actions_wqe_setter { + mlx5dr_action_setter_fp set_single; + mlx5dr_action_setter_fp set_double; + mlx5dr_action_setter_fp set_hit; + mlx5dr_action_setter_fp set_ctr; + uint8_t idx_single; + uint8_t idx_double; + uint8_t idx_ctr; + uint8_t idx_hit; + uint8_t flags; +}; + +struct mlx5dr_action_template { + struct mlx5dr_actions_wqe_setter setters[MLX5DR_ACTION_MAX_STE]; + enum mlx5dr_action_type *action_type_arr; + uint8_t num_of_action_stes; + uint8_t num_actions; + uint8_t only_term; +}; + +struct mlx5dr_action { + uint8_t type; + uint8_t flags; + struct mlx5dr_context *ctx; + union { + struct { + struct mlx5dr_pool_chunk stc[MLX5DR_TABLE_TYPE_MAX]; + union { + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct mlx5dr_devx_obj *arg_obj; + __be64 single_action; + uint8_t single_action_type; + uint16_t num_of_actions; + } modify_header; + struct { + struct mlx5dr_devx_obj *arg_obj; + uint32_t header_size; + } reformat; + struct { + struct mlx5dr_devx_obj *devx_obj; + uint8_t return_reg_id; + } aso; + struct { + uint16_t vport_num; + uint16_t esw_owner_vhca_id; + } vport; + }; + }; + + struct ibv_flow_action *flow_action; + struct mlx5dv_devx_obj *devx_obj; + struct ibv_qp *qp; + }; +}; + +int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[], + uint32_t num_actions, + struct mlx5dv_flow_action_attr *attr); + +int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_put_default_stc(struct mlx5dr_context *ctx, + uint8_t tbl_type); + +void mlx5dr_action_prepare_decap_l3_data(uint8_t *src, uint8_t *dst, + uint16_t num_of_actions); + +int mlx5dr_action_template_process(struct mlx5dr_action_template *at); + +bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions, + enum mlx5dr_table_type table_type); + +int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx, + struct mlx5dr_cmd_stc_modify_attr *stc_attr, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +void mlx5dr_action_free_single_stc(struct mlx5dr_context *ctx, + uint32_t table_type, + struct mlx5dr_pool_chunk *stc); + +static inline void +mlx5dr_action_setter_default_single(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = + htobe32(apply->common_res->default_stc->nop_dw5.offset); +} + 
+static inline void +mlx5dr_action_setter_default_double(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = + htobe32(apply->common_res->default_stc->nop_dw6.offset); + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = + htobe32(apply->common_res->default_stc->nop_dw7.offset); +} + +static inline void +mlx5dr_action_setter_default_ctr(struct mlx5dr_actions_apply_data *apply, + __rte_unused struct mlx5dr_actions_wqe_setter *setter) +{ + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW0] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] = + htobe32(apply->common_res->default_stc->nop_ctr.offset); +} + +static inline void +mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply, + struct mlx5dr_actions_wqe_setter *setter, + bool is_jumbo) +{ + uint8_t num_of_actions; + + /* Set control counter */ + if (setter->flags & ASF_CTR) + setter->set_ctr(apply, setter); + else + mlx5dr_action_setter_default_ctr(apply, setter); + + /* Set single and double on match */ + if (!is_jumbo) { + if (setter->flags & ASF_SINGLE1) + setter->set_single(apply, setter); + else + mlx5dr_action_setter_default_single(apply, setter); + + if (setter->flags & ASF_DOUBLE) + setter->set_double(apply, setter); + else + mlx5dr_action_setter_default_double(apply, setter); + + num_of_actions = setter->flags & ASF_DOUBLE ? + MLX5DR_ACTION_STC_IDX_LAST_COMBO1 : + MLX5DR_ACTION_STC_IDX_LAST_COMBO2; + } else { + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0; + apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0; + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0; + num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_JUMBO_STE; + } + + /* Set next/final hit action */ + setter->set_hit(apply, setter); + + /* Set number of actions */ + apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_CTRL] |= + htobe32(num_of_actions << 29); +} + +#endif /* MLX5DR_ACTION_H_ */ diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c new file mode 100644 index 0000000000..46fdc8ce68 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c @@ -0,0 +1,511 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size) +{ + /* Return the roundup of log2(data_size) */ + if (data_size <= MLX5DR_ARG_DATA_SIZE) + return MLX5DR_ARG_CHUNK_SIZE_1; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 2) + return MLX5DR_ARG_CHUNK_SIZE_2; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 4) + return MLX5DR_ARG_CHUNK_SIZE_3; + if (data_size <= MLX5DR_ARG_DATA_SIZE * 8) + return MLX5DR_ARG_CHUNK_SIZE_4; + + return MLX5DR_ARG_CHUNK_SIZE_MAX; +} + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size) +{ + return BIT(mlx5dr_arg_data_size_to_arg_log_size(data_size)); +} + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions) +{ + return mlx5dr_arg_data_size_to_arg_log_size(num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); +} + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions) +{ + return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions)); +} + +/* Cache and cache element handling */ +int 
mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache) +{ + struct mlx5dr_pattern_cache *new_cache; + + new_cache = simple_calloc(1, sizeof(*new_cache)); + if (!new_cache) { + rte_errno = ENOMEM; + return rte_errno; + } + LIST_INIT(&new_cache->head); + pthread_spin_init(&new_cache->lock, PTHREAD_PROCESS_PRIVATE); + + *cache = new_cache; + + return 0; +} + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache) +{ + simple_free(cache); +} + +static bool mlx5dr_pat_compare_pattern(enum mlx5dr_action_type cur_type, + int cur_num_of_actions, + __be64 cur_actions[], + enum mlx5dr_action_type type, + int num_of_actions, + __be64 actions[]) +{ + int i; + + if (cur_num_of_actions != num_of_actions || cur_type != type) + return false; + + /* All decap-l3 look the same, only change is the num of actions */ + if (type == MLX5DR_ACTION_TYP_TNL_L3_TO_L2) + return true; + + for (i = 0; i < num_of_actions; i++) { + u8 action_id = + MLX5_GET(set_action_in, &actions[i], action_type); + + if (action_id == MLX5_MODIFICATION_TYPE_COPY) { + if (actions[i] != cur_actions[i]) + return false; + } else { + /* Compare just the control, not the values */ + if ((__be32)actions[i] != + (__be32)cur_actions[i]) + return false; + } + } + + return true; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_find_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pat; + + LIST_FOREACH(cached_pat, &cache->head, next) { + if (mlx5dr_pat_compare_pattern(cached_pat->type, + cached_pat->mh_data.num_of_actions, + (__be64 *)cached_pat->mh_data.data, + action->type, + num_of_actions, + actions)) + return cached_pat; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_existing_cached_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = mlx5dr_pat_find_cached_pattern(cache, action, num_of_actions, actions); + if (cached_pattern) { + /* LRU: move it to be first in the list */ + LIST_REMOVE(cached_pattern, next); + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + rte_atomic32_add(&cached_pattern->refcount, 1); + } + + return cached_pattern; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_get_cached_pattern_by_action(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + LIST_FOREACH(cached_pattern, &cache->head, next) { + if (cached_pattern->mh_data.pattern_obj->id == action->modify_header.pattern_obj->id) + return cached_pattern; + } + + return NULL; +} + +static struct mlx5dr_pat_cached_pattern * +mlx5dr_pat_add_pattern_to_cache(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_devx_obj *pattern_obj, + enum mlx5dr_action_type type, + uint16_t num_of_actions, + __be64 *actions) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + cached_pattern = simple_calloc(1, sizeof(*cached_pattern)); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to allocate cached_pattern"); + rte_errno = ENOMEM; + return NULL; + } + + cached_pattern->type = type; + cached_pattern->mh_data.num_of_actions = num_of_actions; + cached_pattern->mh_data.pattern_obj = pattern_obj; + cached_pattern->mh_data.data = + simple_malloc(num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + if (!cached_pattern->mh_data.data) { + DR_LOG(ERR, "Failed to 
allocate mh_data.data"); + rte_errno = ENOMEM; + goto free_cached_obj; + } + + memcpy(cached_pattern->mh_data.data, actions, + num_of_actions * MLX5DR_MODIFY_ACTION_SIZE); + + LIST_INSERT_HEAD(&cache->head, cached_pattern, next); + + rte_atomic32_init(&cached_pattern->refcount); + rte_atomic32_set(&cached_pattern->refcount, 1); + + return cached_pattern; + +free_cached_obj: + simple_free(cached_pattern); + return NULL; +} + +static void +mlx5dr_pat_remove_pattern(struct mlx5dr_pat_cached_pattern *cached_pattern) +{ + LIST_REMOVE(cached_pattern, next); + simple_free(cached_pattern->mh_data.data); + simple_free(cached_pattern); +} + +static void +mlx5dr_pat_put_pattern(struct mlx5dr_pattern_cache *cache, + struct mlx5dr_action *action) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + + pthread_spin_lock(&cache->lock); + cached_pattern = mlx5dr_pat_get_cached_pattern_by_action(cache, action); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to find pattern according to action with pt"); + assert(false); + goto out; + } + + if (!rte_atomic32_dec_and_test(&cached_pattern->refcount)) + goto out; + + mlx5dr_pat_remove_pattern(cached_pattern); + +out: + pthread_spin_unlock(&cache->lock); +} + +static int mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + size_t pattern_sz, + __be64 *pattern) +{ + struct mlx5dr_pat_cached_pattern *cached_pattern; + int ret = 0; + + pthread_spin_lock(&ctx->pattern_cache->lock); + + cached_pattern = mlx5dr_pat_get_existing_cached_pattern(ctx->pattern_cache, + action, + num_of_actions, + pattern); + if (cached_pattern) { + action->modify_header.pattern_obj = cached_pattern->mh_data.pattern_obj; + goto out_unlock; + } + + action->modify_header.pattern_obj = + mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx, + pattern_sz, + (uint8_t *)pattern); + if (!action->modify_header.pattern_obj) { + DR_LOG(ERR, "Failed to create pattern FW object"); + + ret = rte_errno; + goto out_unlock; + } + + cached_pattern = + mlx5dr_pat_add_pattern_to_cache(ctx->pattern_cache, + action->modify_header.pattern_obj, + action->type, + num_of_actions, + pattern); + if (!cached_pattern) { + DR_LOG(ERR, "Failed to add pattern to cache"); + ret = rte_errno; + goto clean_pattern; + } + +out_unlock: + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; + +clean_pattern: + mlx5dr_cmd_destroy_obj(action->modify_header.pattern_obj); + pthread_spin_unlock(&ctx->pattern_cache->lock); + return ret; +} + +static void +mlx5d_arg_init_send_attr(struct mlx5dr_send_engine_post_attr *send_attr, + void *comp_data, + uint32_t arg_idx) +{ + send_attr->opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS; + send_attr->opmod = MLX5DR_WQE_GTA_OPMOD_MOD_ARG; + send_attr->len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA; + send_attr->id = arg_idx; + send_attr->user_data = comp_data; +} + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, NULL, arg_idx); + + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + mlx5dr_action_prepare_decap_l3_data(arg_data, 
(uint8_t *)wqe_arg, + num_of_actions); + mlx5dr_send_engine_post_end(&ctrl, &send_attr); +} + +static int +mlx5dr_arg_poll_for_comp(struct mlx5dr_context *ctx, uint16_t queue_id) +{ + struct rte_flow_op_result comp[1]; + int ret; + + while (true) { + ret = mlx5dr_send_queue_poll(ctx, queue_id, comp, 1); + if (ret) { + if (ret < 0) { + DR_LOG(ERR, "Failed mlx5dr_send_queue_poll"); + } else if (comp[0].status == RTE_FLOW_OP_ERROR) { + DR_LOG(ERR, "Got comp with error"); + rte_errno = ENOENT; + } + break; + } + } + return (ret == 1 ? 0 : ret); +} + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine_post_attr send_attr = {0}; + struct mlx5dr_wqe_gta_data_seg_arg *wqe_arg; + struct mlx5dr_send_engine_post_ctrl ctrl; + struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl; + int i, full_iter, leftover; + size_t wqe_len; + + mlx5d_arg_init_send_attr(&send_attr, comp_data, arg_idx); + + /* Each WQE can hold 64B of data, it might require multiple iteration */ + full_iter = data_size / MLX5DR_ARG_DATA_SIZE; + leftover = data_size & (MLX5DR_ARG_DATA_SIZE - 1); + + for (i = 0; i < full_iter; i++) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, wqe_len); + send_attr.id = arg_idx++; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + + /* Move to next argument data */ + arg_data += MLX5DR_ARG_DATA_SIZE; + } + + if (leftover) { + ctrl = mlx5dr_send_engine_post_start(queue); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_ctrl, &wqe_len); + memset(wqe_ctrl, 0, wqe_len); + mlx5dr_send_engine_post_req_wqe(&ctrl, (void *)&wqe_arg, &wqe_len); + memcpy(wqe_arg, arg_data, leftover); + send_attr.id = arg_idx; + mlx5dr_send_engine_post_end(&ctrl, &send_attr); + } +} + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size) +{ + struct mlx5dr_send_engine *queue; + int ret; + + pthread_spin_lock(&ctx->ctrl_lock); + + /* Get the control queue */ + queue = &ctx->send_queue[ctx->queues - 1]; + + mlx5dr_arg_write(queue, arg_data, arg_idx, arg_data, data_size); + + mlx5dr_send_engine_flush_queue(queue); + + /* Poll for completion */ + ret = mlx5dr_arg_poll_for_comp(ctx, ctx->queues - 1); + if (ret) + DR_LOG(ERR, "Failed to get completions for shared action"); + + pthread_spin_unlock(&ctx->ctrl_lock); + + return ret; +} + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size) +{ + if (arg_size < ctx->caps->log_header_modify_argument_granularity || + arg_size > ctx->caps->log_header_modify_argument_max_alloc) { + return false; + } + return true; +} + +static int +mlx5dr_arg_create_modify_header_arg(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + uint16_t num_of_actions, + __be64 *pattern, + uint32_t bulk_size) +{ + uint32_t flags = action->flags; + uint16_t args_log_size; + int ret = 0; + + /* Alloc bulk of args */ + args_log_size = mlx5dr_arg_get_arg_log_size(num_of_actions); + if (args_log_size >= MLX5DR_ARG_CHUNK_SIZE_MAX) { + DR_LOG(ERR, "Exceed number of allowed actions %u", + num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + if (!mlx5dr_arg_is_valid_arg_request_size(ctx, args_log_size + bulk_size)) { + DR_LOG(ERR, "Arg size %d does not fit FW capability", + args_log_size + 
bulk_size); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.arg_obj = + mlx5dr_cmd_arg_create(ctx->ibv_ctx, args_log_size + bulk_size, + ctx->pd_num); + if (!action->modify_header.arg_obj) { + DR_LOG(ERR, "Failed allocating arg in order: %d", + args_log_size + bulk_size); + return rte_errno; + } + + /* When INLINE need to write the arg data */ + if (flags & MLX5DR_ACTION_FLAG_SHARED) + ret = mlx5dr_arg_write_inline_arg_data(ctx, + action->modify_header.arg_obj->id, + (uint8_t *)pattern, + num_of_actions * + MLX5DR_MODIFY_ACTION_SIZE); + if (ret) { + DR_LOG(ERR, "Failed writing INLINE arg in order: %d", + args_log_size + bulk_size); + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; + } + + return 0; +} + +int mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size) +{ + uint16_t num_of_actions; + int ret; + + num_of_actions = pattern_sz / MLX5DR_MODIFY_ACTION_SIZE; + if (num_of_actions == 0) { + DR_LOG(ERR, "Invalid number of actions %u\n", num_of_actions); + rte_errno = EINVAL; + return rte_errno; + } + + action->modify_header.num_of_actions = num_of_actions; + + ret = mlx5dr_arg_create_modify_header_arg(ctx, action, + num_of_actions, + pattern, + bulk_size); + if (ret) { + DR_LOG(ERR, "Failed to allocate arg"); + return ret; + } + + ret = mlx5dr_pat_get_pattern(ctx, action, num_of_actions, pattern_sz, + pattern); + if (ret) { + DR_LOG(ERR, "Failed to allocate pattern"); + goto free_arg; + } + + return 0; + +free_arg: + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + return rte_errno; +} + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action) +{ + mlx5dr_cmd_destroy_obj(action->modify_header.arg_obj); + mlx5dr_pat_put_pattern(ctx->pattern_cache, action); +} diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h new file mode 100644 index 0000000000..8a4670427f --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_PAT_ARG_H_ +#define MLX5DR_PAT_ARG_H_ + +/* Modify-header arg pool */ +enum mlx5dr_arg_chunk_size { + MLX5DR_ARG_CHUNK_SIZE_1, + /* Keep MIN updated when changing */ + MLX5DR_ARG_CHUNK_SIZE_MIN = MLX5DR_ARG_CHUNK_SIZE_1, + MLX5DR_ARG_CHUNK_SIZE_2, + MLX5DR_ARG_CHUNK_SIZE_3, + MLX5DR_ARG_CHUNK_SIZE_4, + MLX5DR_ARG_CHUNK_SIZE_MAX, +}; + +enum { + MLX5DR_MODIFY_ACTION_SIZE = 8, + MLX5DR_ARG_DATA_SIZE = 64, +}; + +struct mlx5dr_pattern_cache { + /* Protect pattern list */ + pthread_spinlock_t lock; + LIST_HEAD(pattern_head, mlx5dr_pat_cached_pattern) head; +}; + +struct mlx5dr_pat_cached_pattern { + enum mlx5dr_action_type type; + struct { + struct mlx5dr_devx_obj *pattern_obj; + struct dr_icm_chunk *chunk; + uint8_t *data; + uint16_t num_of_actions; + } mh_data; + rte_atomic32_t refcount; + LIST_ENTRY(mlx5dr_pat_cached_pattern) next; +}; + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_get_arg_log_size(uint16_t num_of_actions); + +uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions); + +enum mlx5dr_arg_chunk_size +mlx5dr_arg_data_size_to_arg_log_size(uint16_t data_size); + +uint32_t mlx5dr_arg_data_size_to_arg_size(uint16_t data_size); + +int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache); + +void mlx5dr_pat_uninit_pattern_cache(struct mlx5dr_pattern_cache *cache); + +int 
mlx5dr_pat_arg_create_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action, + size_t pattern_sz, + __be64 pattern[], + uint32_t bulk_size); + +void mlx5dr_pat_arg_destroy_modify_header(struct mlx5dr_context *ctx, + struct mlx5dr_action *action); + +bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx, + uint32_t arg_size); + +void mlx5dr_arg_write(struct mlx5dr_send_engine *queue, + void *comp_data, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); + +void mlx5dr_arg_decapl3_write(struct mlx5dr_send_engine *queue, + uint32_t arg_idx, + uint8_t *arg_data, + uint16_t num_of_actions); + +int mlx5dr_arg_write_inline_arg_data(struct mlx5dr_context *ctx, + uint32_t arg_idx, + uint8_t *arg_data, + size_t data_size); +#endif /* MLX5DR_PAT_ARG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
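[Editorial sketch, not part of the patch above: the sizing logic in mlx5dr_pat_arg.c reduces to a small amount of arithmetic. The standalone program below only mirrors the rounding done by mlx5dr_arg_data_size_to_arg_log_size() and the 64B split done by mlx5dr_arg_write(); the ARG_DATA_SIZE and MODIFY_ACTION_SIZE macros are local stand-ins for the MLX5DR_* constants defined in mlx5dr_pat_arg.h.]

#include <stdint.h>
#include <stdio.h>

#define ARG_DATA_SIZE 64	/* stand-in for MLX5DR_ARG_DATA_SIZE */
#define MODIFY_ACTION_SIZE 8	/* stand-in for MLX5DR_MODIFY_ACTION_SIZE */

/* Round data_size up to a power-of-two multiple of 64B and return the log:
 * 0 -> 64B, 1 -> 128B, 2 -> 256B, 3 -> 512B (simplified, no MAX cap).
 */
static unsigned int arg_log_size(uint16_t data_size)
{
	unsigned int log_size = 0;

	while ((unsigned int)(ARG_DATA_SIZE << log_size) < data_size)
		log_size++;
	return log_size;
}

int main(void)
{
	uint16_t data_size = 5 * MODIFY_ACTION_SIZE;	/* e.g. 5 modify-header actions */
	unsigned int full_iter = data_size / ARG_DATA_SIZE;	/* full 64B WQE writes */
	unsigned int leftover = data_size & (ARG_DATA_SIZE - 1);	/* tail bytes */

	printf("arg log size: %u (allocation of %u bytes)\n",
	       arg_log_size(data_size), ARG_DATA_SIZE << arg_log_size(data_size));
	printf("full 64B WQEs: %u, leftover bytes: %u\n", full_iter, leftover);
	return 0;
}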
* [v6 17/18] net/mlx5/hws: Add HWS debug layer 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (15 preceding siblings ...) 2022-10-20 15:57 ` [v6 16/18] net/mlx5/hws: Add HWS action object Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 18/18] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-24 10:56 ` [v6 00/18] net/mlx5: Add HW steering low level support Raslan Darawsheh 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad Cc: dev, orika, Hamdan Igbaria From: Hamdan Igbaria <hamdani@nvidia.com> The debug layer is used to generate a debug CSV file containing details of the context, table, matcher, rules and other useful debug information. Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> Signed-off-by: Alex Vesker <valex@nvidia.com> --- drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++++++++++++++++++++++++++ drivers/net/mlx5/hws/mlx5dr_debug.h | 28 ++ 2 files changed, 490 insertions(+) create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c new file mode 100644 index 0000000000..890a761c48 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.c @@ -0,0 +1,462 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#include "mlx5dr_internal.h" + +const char *mlx5dr_debug_action_type_str[] = { + [MLX5DR_ACTION_TYP_LAST] = "LAST", + [MLX5DR_ACTION_TYP_TNL_L2_TO_L2] = "TNL_L2_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L2] = "L2_TO_TNL_L2", + [MLX5DR_ACTION_TYP_TNL_L3_TO_L2] = "TNL_L3_TO_L2", + [MLX5DR_ACTION_TYP_L2_TO_TNL_L3] = "L2_TO_TNL_L3", + [MLX5DR_ACTION_TYP_DROP] = "DROP", + [MLX5DR_ACTION_TYP_TIR] = "TIR", + [MLX5DR_ACTION_TYP_FT] = "FT", + [MLX5DR_ACTION_TYP_CTR] = "CTR", + [MLX5DR_ACTION_TYP_TAG] = "TAG", + [MLX5DR_ACTION_TYP_MODIFY_HDR] = "MODIFY_HDR", + [MLX5DR_ACTION_TYP_VPORT] = "VPORT", + [MLX5DR_ACTION_TYP_MISS] = "DEFAULT_MISS", + [MLX5DR_ACTION_TYP_POP_VLAN] = "POP_VLAN", + [MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN", + [MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER", + [MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT", +}; + +static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX, + "Missing mlx5dr_debug_action_type_str"); + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type) +{ + return mlx5dr_debug_action_type_str[action_type]; +} + +static int +mlx5dr_debug_dump_matcher_template_definer(FILE *f, + struct mlx5dr_match_template *mt) +{ + struct mlx5dr_definer *definer = mt->definer; + int i, ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,", + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER, + (uint64_t)(uintptr_t)definer, + (uint64_t)(uintptr_t)mt, + definer->obj->id, + definer->type); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (i = 0; i < DW_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->dw_selector[i], + (i == DW_SELECTORS - 1) ? "," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < BYTE_SELECTORS; i++) { + ret = fprintf(f, "0x%x%s", definer->byte_selector[i], + (i == BYTE_SELECTORS - 1) ? 
"," : "-"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + for (i = 0; i < MLX5DR_JUMBO_TAG_SZ; i++) { + ret = fprintf(f, "%02x", definer->mask.jumbo[i]); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + ret = fprintf(f, "\n"); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_match_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + int i, ret; + + for (i = 0; i < matcher->num_of_mt; i++) { + struct mlx5dr_match_template *mt = matcher->mt[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE, + (uint64_t)(uintptr_t)mt, + (uint64_t)(uintptr_t)matcher, + is_root ? 0 : mt->fc_sz, + mt->flags); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + if (!is_root) { + ret = mlx5dr_debug_dump_matcher_template_definer(f, mt); + if (ret) + return ret; + } + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_action_template(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_action_type action_type; + int i, j, ret; + + for (i = 0; i < matcher->num_of_at; i++) { + struct mlx5dr_action_template *at = matcher->at[i]; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE, + (uint64_t)(uintptr_t)at, + (uint64_t)(uintptr_t)matcher, + at->only_term ? 0 : 1, + is_root ? 0 : at->num_of_action_stes, + at->num_actions); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < at->num_actions; j++) { + action_type = at->action_type_arr[j]; + ret = fprintf(f, ",%s", mlx5dr_debug_action_type_to_str(action_type)); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + + fprintf(f, "\n"); + } + + return 0; +} + +static int +mlx5dr_debug_dump_matcher_attr(FILE *f, struct mlx5dr_matcher *matcher) +{ + struct mlx5dr_matcher_attr *attr = &matcher->attr; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR, + (uint64_t)(uintptr_t)matcher, + attr->priority, + attr->mode, + attr->table.sz_row_log, + attr->table.sz_col_log, + attr->optimize_using_rule_idx, + attr->optimize_flow_src); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_matcher(FILE *f, struct mlx5dr_matcher *matcher) +{ + bool is_root = matcher->tbl->level == MLX5DR_ROOT_LEVEL; + enum mlx5dr_table_type tbl_type = matcher->tbl->type; + struct mlx5dr_devx_obj *ste_0, *ste_1 = NULL; + struct mlx5dr_pool_chunk *ste; + struct mlx5dr_pool *ste_pool; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,0x%" PRIx64, + MLX5DR_DEBUG_RES_TYPE_MATCHER, + (uint64_t)(uintptr_t)matcher, + (uint64_t)(uintptr_t)matcher->tbl, + matcher->num_of_mt, + is_root ? 0 : matcher->end_ft->id, + matcher->col_matcher ? (uint64_t)(uintptr_t)matcher->col_matcher : 0); + if (ret < 0) + goto out_err; + + ste = &matcher->match_ste.ste; + ste_pool = matcher->match_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d", + matcher->match_ste.rtc_0 ? matcher->match_ste.rtc_0->id : 0, + ste_0 ? 
(int)ste_0->id : -1, + matcher->match_ste.rtc_1 ? matcher->match_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ste = &matcher->action_ste.ste; + ste_pool = matcher->action_ste.pool; + if (ste_pool) { + ste_0 = mlx5dr_pool_chunk_get_base_devx_obj(ste_pool, ste); + if (tbl_type == MLX5DR_TABLE_TYPE_FDB) + ste_1 = mlx5dr_pool_chunk_get_base_devx_obj_mirror(ste_pool, ste); + } else { + ste_0 = NULL; + ste_1 = NULL; + } + + ret = fprintf(f, ",%d,%d,%d,%d\n", + matcher->action_ste.rtc_0 ? matcher->action_ste.rtc_0->id : 0, + ste_0 ? (int)ste_0->id : -1, + matcher->action_ste.rtc_1 ? matcher->action_ste.rtc_1->id : 0, + ste_1 ? (int)ste_1->id : -1); + if (ret < 0) + goto out_err; + + ret = mlx5dr_debug_dump_matcher_attr(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_match_template(f, matcher); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_matcher_action_template(f, matcher); + if (ret) + return ret; + + return 0; + +out_err: + rte_errno = EINVAL; + return rte_errno; +} + +static int mlx5dr_debug_dump_table(FILE *f, struct mlx5dr_table *tbl) +{ + bool is_root = tbl->level == MLX5DR_ROOT_LEVEL; + struct mlx5dr_matcher *matcher; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",0x%" PRIx64 ",%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_TABLE, + (uint64_t)(uintptr_t)tbl, + (uint64_t)(uintptr_t)tbl->ctx, + is_root ? 0 : tbl->ft->id, + tbl->type, + is_root ? 0 : tbl->fw_ft_type, + tbl->level); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + LIST_FOREACH(matcher, &tbl->head, next) { + ret = mlx5dr_debug_dump_matcher(f, matcher); + if (ret) + return ret; + } + + return 0; +} + +static int +mlx5dr_debug_dump_context_send_engine(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_send_engine *send_queue; + int ret, i, j; + + for (i = 0; i < (int)ctx->queues; i++) { + send_queue = &ctx->send_queue[i]; + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE, + (uint64_t)(uintptr_t)ctx, + i, + send_queue->used_entries, + send_queue->th_entries, + send_queue->rings, + send_queue->num_entries, + send_queue->err, + send_queue->completed.ci, + send_queue->completed.pi, + send_queue->completed.mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + for (j = 0; j < MLX5DR_NUM_SEND_RINGS; j++) { + struct mlx5dr_send_ring *send_ring = &send_queue->send_ring[j]; + struct mlx5dr_send_ring_cq *cq = &send_ring->send_cq; + struct mlx5dr_send_ring_sq *sq = &send_ring->send_sq; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING, + (uint64_t)(uintptr_t)ctx, + j, + i, + cq->cqn, + cq->cons_index, + cq->ncqe_mask, + cq->buf_sz, + cq->ncqe, + cq->cqe_log_sz, + cq->poll_wqe, + cq->cqe_sz, + sq->sqn, + sq->obj->id, + sq->cur_post, + sq->buf_mask); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + } + } + + return 0; +} + +static int mlx5dr_debug_dump_context_caps(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_cmd_query_caps *caps = ctx->caps; + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%s,%d,%d,%d,%d,", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS, + (uint64_t)(uintptr_t)ctx, + caps->fw_ver, + caps->wqe_based_update, + caps->ste_format, + caps->ste_alloc_log_max, + caps->log_header_modify_argument_max_alloc); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = fprintf(f, "%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d,%d\n", + caps->flex_protocols, + 
caps->rtc_reparse_mode, + caps->rtc_index_mode, + caps->ste_alloc_log_gran, + caps->stc_alloc_log_max, + caps->stc_alloc_log_gran, + caps->rtc_log_depth_max, + caps->format_select_gtpu_dw_0, + caps->format_select_gtpu_dw_1, + caps->format_select_gtpu_dw_2, + caps->format_select_gtpu_ext_dw_0, + caps->nic_ft.max_level, + caps->nic_ft.reparse, + caps->fdb_ft.max_level, + caps->fdb_ft.reparse, + caps->log_header_modify_argument_granularity); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_attr(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%u,0x%" PRIx64 ",%d,%zu,%d\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR, + (uint64_t)(uintptr_t)ctx, + ctx->pd_num, + ctx->queues, + ctx->send_queue->num_entries); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + return 0; +} + +static int mlx5dr_debug_dump_context_info(FILE *f, struct mlx5dr_context *ctx) +{ + int ret; + + ret = fprintf(f, "%d,0x%" PRIx64 ",%d,%s,%s\n", + MLX5DR_DEBUG_RES_TYPE_CONTEXT, + (uint64_t)(uintptr_t)ctx, + ctx->flags & MLX5DR_CONTEXT_FLAG_HWS_SUPPORT, + mlx5_glue->get_device_name(ctx->ibv_ctx->device), + DEBUG_VERSION); + if (ret < 0) { + rte_errno = EINVAL; + return rte_errno; + } + + ret = mlx5dr_debug_dump_context_attr(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_caps(f, ctx); + if (ret) + return ret; + + return 0; +} + +static int mlx5dr_debug_dump_context(FILE *f, struct mlx5dr_context *ctx) +{ + struct mlx5dr_table *tbl; + int ret; + + ret = mlx5dr_debug_dump_context_info(f, ctx); + if (ret) + return ret; + + ret = mlx5dr_debug_dump_context_send_engine(f, ctx); + if (ret) + return ret; + + LIST_FOREACH(tbl, &ctx->head, next) { + ret = mlx5dr_debug_dump_table(f, tbl); + if (ret) + return ret; + } + + return 0; +} + +int mlx5dr_debug_dump(struct mlx5dr_context *ctx, FILE *f) +{ + int ret; + + if (!f || !ctx) { + rte_errno = EINVAL; + return -rte_errno; + } + + pthread_spin_lock(&ctx->ctrl_lock); + ret = mlx5dr_debug_dump_context(f, ctx); + pthread_spin_unlock(&ctx->ctrl_lock); + + return -ret; +} diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.h b/drivers/net/mlx5/hws/mlx5dr_debug.h new file mode 100644 index 0000000000..cf00170f7d --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_debug.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_DEBUG_H_ +#define MLX5DR_DEBUG_H_ + +#define DEBUG_VERSION "1.0.DPDK" + +enum mlx5dr_debug_res_type { + MLX5DR_DEBUG_RES_TYPE_CONTEXT = 4000, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_ATTR = 4001, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_CAPS = 4002, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_ENGINE = 4003, + MLX5DR_DEBUG_RES_TYPE_CONTEXT_SEND_RING = 4004, + + MLX5DR_DEBUG_RES_TYPE_TABLE = 4100, + + MLX5DR_DEBUG_RES_TYPE_MATCHER = 4200, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ATTR = 4201, + MLX5DR_DEBUG_RES_TYPE_MATCHER_MATCH_TEMPLATE = 4202, + MLX5DR_DEBUG_RES_TYPE_MATCHER_ACTION_TEMPLATE = 4204, + MLX5DR_DEBUG_RES_TYPE_MATCHER_TEMPLATE_DEFINER = 4203, +}; + +const char *mlx5dr_debug_action_type_to_str(enum mlx5dr_action_type action_type); + +#endif /* MLX5DR_DEBUG_H_ */ -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
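[Editorial sketch, not part of the patch above: the commit message says the debug layer emits a CSV of context, table, matcher and rule details. Given an open context, the only entry point needed is mlx5dr_debug_dump(ctx, f) as defined in the mlx5dr_debug.c hunk; the helper below assumes that prototype is exported through mlx5dr.h and that ctx was obtained earlier from mlx5dr_context_open().]

#include <stdio.h>
#include "mlx5dr.h"

/* Dump the HWS state of an open context to a CSV file at `path`.
 * Returns 0 on success, a negative value otherwise.
 */
static int hws_dump_to_file(struct mlx5dr_context *ctx, const char *path)
{
	FILE *f;
	int ret;

	f = fopen(path, "w");
	if (!f)
		return -1;

	/* Takes the context ctrl lock and walks tables, matchers and queues */
	ret = mlx5dr_debug_dump(ctx, f);

	fclose(f);
	return ret;
}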
* RE: [v6 17/18] net/mlx5/hws: Add HWS debug layer 2022-10-20 15:57 ` [v6 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-24 6:54 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:54 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam, Hamdan Igbaria > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com>; Hamdan Igbaria > <hamdani@nvidia.com> > Subject: [v6 17/18] net/mlx5/hws: Add HWS debug layer > > From: Hamdan Igbaria <hamdani@nvidia.com> > > The debug layer is used to generate a debug CSV file containing details of > the context, table, matcher, rules and other useful debug information. > > Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com> > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* [v6 18/18] net/mlx5/hws: Enable HWS 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (16 preceding siblings ...) 2022-10-20 15:57 ` [v6 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker @ 2022-10-20 15:57 ` Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-24 10:56 ` [v6 00/18] net/mlx5: Add HW steering low level support Raslan Darawsheh 18 siblings, 1 reply; 134+ messages in thread From: Alex Vesker @ 2022-10-20 15:57 UTC (permalink / raw) To: valex, viacheslavo, thomas, suanmingm, Matan Azrad; +Cc: dev, orika Replace stub implementation of HWS with mlx5dr code. Signed-off-by: Alex Vesker <valex@nvidia.com> --- doc/guides/nics/mlx5.rst | 5 +- doc/guides/rel_notes/release_22_11.rst | 4 + drivers/common/mlx5/linux/meson.build | 9 +- drivers/net/mlx5/hws/meson.build | 18 + drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 201 ++++++++-- drivers/net/mlx5/hws/mlx5dr_internal.h | 93 +++++ drivers/net/mlx5/meson.build | 7 +- drivers/net/mlx5/mlx5.c | 6 +- drivers/net/mlx5/mlx5.h | 7 +- drivers/net/mlx5/mlx5_devx.c | 2 +- drivers/net/mlx5/mlx5_dr.c | 383 ------------------- drivers/net/mlx5/mlx5_flow.c | 2 + drivers/net/mlx5/mlx5_flow.h | 11 +- drivers/net/mlx5/mlx5_flow_hw.c | 10 +- 14 files changed, 326 insertions(+), 432 deletions(-) create mode 100644 drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} (67%) create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h delete mode 100644 drivers/net/mlx5/mlx5_dr.c diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst index bb436892a0..303eb17714 100644 --- a/doc/guides/nics/mlx5.rst +++ b/doc/guides/nics/mlx5.rst @@ -539,7 +539,10 @@ Limitations - WQE based high scaling and safer flow insertion/destruction. - Set ``dv_flow_en`` to 2 in order to enable HW steering. - - Async queue-based ``rte_flow_q`` APIs supported only. + - Async queue-based ``rte_flow_async`` APIs supported only. + - NIC ConnectX-5 and before are not supported. + - Partial match with item template is not supported. + - IPv6 5-tuple matching is not supported. - Match on GRE header supports the following fields: diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst index 1c3daf141d..cdc5837f1d 100644 --- a/doc/guides/rel_notes/release_22_11.rst +++ b/doc/guides/rel_notes/release_22_11.rst @@ -249,6 +249,10 @@ New Features sysfs entries to adjust the minimum and maximum uncore frequency values, which works on Linux with Intel hardware only. +* **Updated Nvidia mlx5 driver.** + + * Added full support for queue based async HW steering to the PMD. + * **Rewritten pmdinfo script.** The ``dpdk-pmdinfo.py`` script was rewritten to produce valid JSON only. diff --git a/drivers/common/mlx5/linux/meson.build b/drivers/common/mlx5/linux/meson.build index e6b32eb84d..84e2a1ad8c 100644 --- a/drivers/common/mlx5/linux/meson.build +++ b/drivers/common/mlx5/linux/meson.build @@ -8,6 +8,7 @@ dlopen_ibverbs = (get_option('ibverbs_link') == 'dlopen') LIB_GLUE_BASE = 'librte_common_mlx5_glue.so' LIB_GLUE_VERSION = abi_version LIB_GLUE = LIB_GLUE_BASE + '.'
+ LIB_GLUE_VERSION +mlx5_config = configuration_data() if dlopen_ibverbs dpdk_conf.set('RTE_IBVERBS_LINK_DLOPEN', 1) cflags += [ @@ -223,15 +224,15 @@ if libmtcr_ul_found [ 'HAVE_MLX5_MSTFLINT', 'mstflint/mtcr.h', 'mopen' ], ] endif -config = configuration_data() + foreach arg:has_sym_args - config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs)) + mlx5_config.set(arg[0], cc.has_header_symbol(arg[1], arg[2], dependencies: libs)) endforeach foreach arg:has_member_args file_prefix = '#include <' + arg[1] + '>' - config.set(arg[0], cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs)) + mlx5_config.set(arg[0], cc.has_member(arg[2], arg[3], prefix : file_prefix, dependencies: libs)) endforeach -configure_file(output : 'mlx5_autoconf.h', configuration : config) +configure_file(output : 'mlx5_autoconf.h', configuration : mlx5_config) # Build Glue Library if dlopen_ibverbs diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build new file mode 100644 index 0000000000..d2bb864fd2 --- /dev/null +++ b/drivers/net/mlx5/hws/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright (c) 2022 NVIDIA Corporation & Affiliates + +includes += include_directories('.') +sources += files( + 'mlx5dr_context.c', + 'mlx5dr_table.c', + 'mlx5dr_matcher.c', + 'mlx5dr_rule.c', + 'mlx5dr_action.c', + 'mlx5dr_buddy.c', + 'mlx5dr_pool.c', + 'mlx5dr_cmd.c', + 'mlx5dr_send.c', + 'mlx5dr_definer.c', + 'mlx5dr_debug.c', + 'mlx5dr_pat_arg.c', +) diff --git a/drivers/net/mlx5/mlx5_dr.h b/drivers/net/mlx5/hws/mlx5dr.h similarity index 67% rename from drivers/net/mlx5/mlx5_dr.h rename to drivers/net/mlx5/hws/mlx5dr.h index d0b2c15652..f8de27c615 100644 --- a/drivers/net/mlx5/mlx5_dr.h +++ b/drivers/net/mlx5/hws/mlx5dr.h @@ -1,9 +1,9 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. + * Copyright (c) 2022 NVIDIA Corporation & Affiliates */ -#ifndef MLX5_DR_H_ -#define MLX5_DR_H_ +#ifndef MLX5DR_H_ +#define MLX5DR_H_ #include <rte_flow.h> @@ -11,6 +11,7 @@ struct mlx5dr_context; struct mlx5dr_table; struct mlx5dr_matcher; struct mlx5dr_rule; +struct ibv_context; enum mlx5dr_table_type { MLX5DR_TABLE_TYPE_NIC_RX, @@ -26,6 +27,27 @@ enum mlx5dr_matcher_resource_mode { MLX5DR_MATCHER_RESOURCE_MODE_HTABLE, }; +enum mlx5dr_action_type { + MLX5DR_ACTION_TYP_LAST, + MLX5DR_ACTION_TYP_TNL_L2_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L2, + MLX5DR_ACTION_TYP_TNL_L3_TO_L2, + MLX5DR_ACTION_TYP_L2_TO_TNL_L3, + MLX5DR_ACTION_TYP_DROP, + MLX5DR_ACTION_TYP_TIR, + MLX5DR_ACTION_TYP_FT, + MLX5DR_ACTION_TYP_CTR, + MLX5DR_ACTION_TYP_TAG, + MLX5DR_ACTION_TYP_MODIFY_HDR, + MLX5DR_ACTION_TYP_VPORT, + MLX5DR_ACTION_TYP_MISS, + MLX5DR_ACTION_TYP_POP_VLAN, + MLX5DR_ACTION_TYP_PUSH_VLAN, + MLX5DR_ACTION_TYP_ASO_METER, + MLX5DR_ACTION_TYP_ASO_CT, + MLX5DR_ACTION_TYP_MAX, +}; + enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_ROOT_RX = 1 << 0, MLX5DR_ACTION_FLAG_ROOT_TX = 1 << 1, @@ -33,7 +55,10 @@ enum mlx5dr_action_flags { MLX5DR_ACTION_FLAG_HWS_RX = 1 << 3, MLX5DR_ACTION_FLAG_HWS_TX = 1 << 4, MLX5DR_ACTION_FLAG_HWS_FDB = 1 << 5, - MLX5DR_ACTION_FLAG_INLINE = 1 << 6, + /* Shared action can be used over a few threads, since data is written + * only once at the creation of the action. 
+ */ + MLX5DR_ACTION_FLAG_SHARED = 1 << 6, }; enum mlx5dr_action_reformat_type { @@ -43,6 +68,18 @@ enum mlx5dr_action_reformat_type { MLX5DR_ACTION_REFORMAT_TYPE_L2_TO_TNL_L3, }; +enum mlx5dr_action_aso_meter_color { + MLX5DR_ACTION_ASO_METER_COLOR_RED = 0x0, + MLX5DR_ACTION_ASO_METER_COLOR_YELLOW = 0x1, + MLX5DR_ACTION_ASO_METER_COLOR_GREEN = 0x2, + MLX5DR_ACTION_ASO_METER_COLOR_UNDEFINED = 0x3, +}; + +enum mlx5dr_action_aso_ct_flags { + MLX5DR_ACTION_ASO_CT_DIRECTION_INITIATOR = 0 << 0, + MLX5DR_ACTION_ASO_CT_DIRECTION_RESPONDER = 1 << 0, +}; + enum mlx5dr_match_template_flags { /* Allow relaxed matching by skipping derived dependent match fields. */ MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1, @@ -56,7 +93,7 @@ enum mlx5dr_send_queue_actions { struct mlx5dr_context_attr { uint16_t queues; uint16_t queue_size; - size_t initial_log_ste_memory; + size_t initial_log_ste_memory; /* Currently not in use */ /* Optional PD used for allocating res ources */ struct ibv_pd *pd; }; @@ -66,9 +103,21 @@ struct mlx5dr_table_attr { uint32_t level; }; +enum mlx5dr_matcher_flow_src { + MLX5DR_MATCHER_FLOW_SRC_ANY = 0x0, + MLX5DR_MATCHER_FLOW_SRC_WIRE = 0x1, + MLX5DR_MATCHER_FLOW_SRC_VPORT = 0x2, +}; + struct mlx5dr_matcher_attr { + /* Processing priority inside table */ uint32_t priority; + /* Provide all rules with unique rule_idx in num_log range to reduce locking */ + bool optimize_using_rule_idx; + /* Resource mode and corresponding size */ enum mlx5dr_matcher_resource_mode mode; + /* Optimize insertion in case packet origin is the same for all rules */ + enum mlx5dr_matcher_flow_src optimize_flow_src; union { struct { uint8_t sz_row_log; @@ -84,6 +133,8 @@ struct mlx5dr_matcher_attr { struct mlx5dr_rule_attr { uint16_t queue_id; void *user_data; + /* Valid if matcher optimize_using_rule_idx is set */ + uint32_t rule_idx; uint32_t burst:1; }; @@ -92,6 +143,9 @@ struct mlx5dr_devx_obj { uint32_t id; }; +/* In actions that take offset, the offset is unique, and the user should not + * reuse the same index because data changing is not atomic. + */ struct mlx5dr_rule_action { struct mlx5dr_action *action; union { @@ -116,31 +170,17 @@ struct mlx5dr_rule_action { struct { rte_be32_t vlan_hdr; } push_vlan; - }; -}; -enum { - MLX5DR_MATCH_TAG_SZ = 32, - MLX5DR_JAMBO_TAG_SZ = 44, -}; - -enum mlx5dr_rule_status { - MLX5DR_RULE_STATUS_UNKNOWN, - MLX5DR_RULE_STATUS_CREATING, - MLX5DR_RULE_STATUS_CREATED, - MLX5DR_RULE_STATUS_DELETING, - MLX5DR_RULE_STATUS_DELETED, - MLX5DR_RULE_STATUS_FAILED, -}; + struct { + uint32_t offset; + enum mlx5dr_action_aso_meter_color init_color; + } aso_meter; -struct mlx5dr_rule { - struct mlx5dr_matcher *matcher; - union { - uint8_t match_tag[MLX5DR_MATCH_TAG_SZ]; - struct ibv_flow *flow; + struct { + uint32_t offset; + enum mlx5dr_action_aso_ct_flags direction; + } aso_ct; }; - enum mlx5dr_rule_status status; - uint32_t rtc_used; /* The RTC into which the STE was inserted */ }; /* Open a context used for direct rule insertion using hardware steering. @@ -153,7 +193,7 @@ struct mlx5dr_rule { * @return pointer to mlx5dr_context on success NULL otherwise. */ struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, +mlx5dr_context_open(struct ibv_context *ibv_ctx, struct mlx5dr_context_attr *attr); /* Close a context used for direct hardware steering. 
@@ -205,6 +245,26 @@ mlx5dr_match_template_create(const struct rte_flow_item items[], */ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); +/* Create new action template based on action_type array, the action template + * will be used for matcher creation. + * + * @param[in] action_type + * An array of actions based on the order of actions which will be provided + * with rule_actions to mlx5dr_rule_create. The last action is marked + * using MLX5DR_ACTION_TYP_LAST. + * @return pointer to mlx5dr_action_template on success NULL otherwise + */ +struct mlx5dr_action_template * +mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]); + +/* Destroy action template. + * + * @param[in] at + * Action template to destroy. + * @return zero on success non zero otherwise. + */ +int mlx5dr_action_template_destroy(struct mlx5dr_action_template *at); + /* Create a new direct rule matcher. Each matcher can contain multiple rules. * Matchers on the table will be processed by priority. Matching fields and * mask are described by the match template. In some cases multiple match @@ -216,6 +276,10 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt); * Array of match templates to be used on matcher. * @param[in] num_of_mt * Number of match templates in mt array. + * @param[in] at + * Array of action templates to be used on matcher. + * @param[in] num_of_at + * Number of action templates in mt array. * @param[in] attr * Attributes used for matcher creation. * @return pointer to mlx5dr_matcher on success NULL otherwise. @@ -224,6 +288,8 @@ struct mlx5dr_matcher * mlx5dr_matcher_create(struct mlx5dr_table *table, struct mlx5dr_match_template *mt[], uint8_t num_of_mt, + struct mlx5dr_action_template *at[], + uint8_t num_of_at, struct mlx5dr_matcher_attr *attr); /* Destroy direct rule matcher. @@ -245,11 +311,13 @@ size_t mlx5dr_rule_get_handle_size(void); * @param[in] matcher * The matcher in which the new rule will be created. * @param[in] mt_idx - * Match template index to create the rule with. + * Match template index to create the match with. * @param[in] items * The items used for the value matching. * @param[in] rule_actions * Rule action to be executed on match. + * @param[in] at_idx + * Action template index to apply the actions with. * @param[in] num_of_actions * Number of rule actions. * @param[in] attr @@ -261,8 +329,8 @@ size_t mlx5dr_rule_get_handle_size(void); int mlx5dr_rule_create(struct mlx5dr_matcher *matcher, uint8_t mt_idx, const struct rte_flow_item items[], + uint8_t at_idx, struct mlx5dr_rule_action rule_actions[], - uint8_t num_of_actions, struct mlx5dr_rule_attr *attr, struct mlx5dr_rule *rule_handle); @@ -317,6 +385,21 @@ mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx, struct mlx5dr_table *tbl, uint32_t flags); +/* Create direct rule goto vport action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] ib_port_num + * Destination ib_port number. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_dest_vport(struct mlx5dr_context *ctx, + uint32_t ib_port_num, + uint32_t flags); + /* Create direct rule goto TIR action. 
* * @param[in] ctx @@ -400,10 +483,66 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx, struct mlx5dr_action * mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, size_t pattern_sz, - rte_be64_t pattern[], + __be64 pattern[], uint32_t log_bulk_size, uint32_t flags); +/* Create direct rule ASO flow meter action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_c + * Copy the ASO object value into this reg_c, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_meter(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_c, + uint32_t flags); + +/* Create direct rule ASO CT action. + * + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] devx_obj + * The DEVX ASO object. + * @param[in] return_reg_id + * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx, + struct mlx5dr_devx_obj *devx_obj, + uint8_t return_reg_id, + uint32_t flags); + +/* Create direct rule pop vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags); + +/* Create direct rule push vlan action. + * @param[in] ctx + * The context in which the new action will be created. + * @param[in] flags + * Action creation flags. (enum mlx5dr_action_flags) + * @return pointer to mlx5dr_action on success NULL otherwise. + */ +struct mlx5dr_action * +mlx5dr_action_create_push_vlan(struct mlx5dr_context *ctx, uint32_t flags); + /* Destroy direct rule action. * * @param[in] action diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h new file mode 100644 index 0000000000..586b3e3ea3 --- /dev/null +++ b/drivers/net/mlx5/hws/mlx5dr_internal.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2022 NVIDIA Corporation & Affiliates + */ + +#ifndef MLX5DR_INTERNAL_H_ +#define MLX5DR_INTERNAL_H_ + +#include <stdint.h> +#include <sys/queue.h> +/* Verbs headers do not support -pedantic. 
*/ +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif +#include <infiniband/verbs.h> +#include <infiniband/mlx5dv.h> +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif +#include <rte_flow.h> +#include <rte_gtp.h> + +#include "mlx5_prm.h" +#include "mlx5_glue.h" +#include "mlx5_flow.h" +#include "mlx5_utils.h" +#include "mlx5_malloc.h" + +#include "mlx5dr.h" +#include "mlx5dr_pool.h" +#include "mlx5dr_context.h" +#include "mlx5dr_table.h" +#include "mlx5dr_matcher.h" +#include "mlx5dr_send.h" +#include "mlx5dr_rule.h" +#include "mlx5dr_cmd.h" +#include "mlx5dr_action.h" +#include "mlx5dr_definer.h" +#include "mlx5dr_debug.h" +#include "mlx5dr_pat_arg.h" + +#define DW_SIZE 4 +#define BITS_IN_BYTE 8 +#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE) + +#define BIT(_bit) (1ULL << (_bit)) +#define IS_BIT_SET(_value, _bit) ((_value) & (1ULL << (_bit))) + +#ifndef ARRAY_SIZE +#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0])) +#endif + +#ifdef RTE_LIBRTE_MLX5_DEBUG +/* Prevent double function name print when debug is set */ +#define DR_LOG DRV_LOG +#else +/* Print function name as part of the log */ +#define DR_LOG(level, ...) \ + DRV_LOG(level, RTE_FMT("[%s]: " RTE_FMT_HEAD(__VA_ARGS__,), __func__, RTE_FMT_TAIL(__VA_ARGS__,))) +#endif + +static inline void *simple_malloc(size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS, + size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void *simple_calloc(size_t nmemb, size_t size) +{ + return mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO, + nmemb * size, + MLX5_MALLOC_ALIGNMENT, + SOCKET_ID_ANY); +} + +static inline void simple_free(void *addr) +{ + mlx5_free(addr); +} + +static inline bool is_mem_zero(const uint8_t *mem, size_t size) +{ + assert(size); + return (*mem == 0) && memcmp(mem, mem + 1, size - 1) == 0; +} + +static inline uint64_t roundup_pow_of_two(uint64_t n) +{ + return n == 1 ? 1 : 1ULL << log2above(n); +} + +#endif /* MLX5DR_INTERNAL_H_ */ diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build index 6a84d96380..6b947eaab5 100644 --- a/drivers/net/mlx5/meson.build +++ b/drivers/net/mlx5/meson.build @@ -14,10 +14,8 @@ sources = files( 'mlx5.c', 'mlx5_ethdev.c', 'mlx5_flow.c', - 'mlx5_dr.c', 'mlx5_flow_meter.c', 'mlx5_flow_dv.c', - 'mlx5_flow_hw.c', 'mlx5_flow_aso.c', 'mlx5_flow_flex.c', 'mlx5_mac.c', @@ -42,6 +40,7 @@ sources = files( if is_linux sources += files( + 'mlx5_flow_hw.c', 'mlx5_flow_verbs.c', ) if (dpdk_conf.has('RTE_ARCH_X86_64') @@ -72,3 +71,7 @@ endif testpmd_sources += files('mlx5_testpmd.c') subdir(exec_env) + +if (is_linux and mlx5_config.get('HAVE_IBV_FLOW_DV_SUPPORT', false)) + subdir('hws') +endif diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index b39ef1ecbe..a34fbcf74d 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1700,7 +1700,7 @@ mlx5_free_table_hash_list(struct mlx5_priv *priv) *tbls = NULL; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT /** * Allocate HW steering group hash list. * @@ -1749,7 +1749,7 @@ mlx5_alloc_table_hash_list(struct mlx5_priv *priv __rte_unused) int err = 0; /* Tables are only used in DV and DR modes. */ -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT struct mlx5_dev_ctx_shared *sh = priv->sh; char s[MLX5_NAME_SIZE]; @@ -1942,7 +1942,7 @@ mlx5_dev_close(struct rte_eth_dev *dev) /* Free the eCPRI flex parser resource. 
*/ mlx5_flex_parser_ecpri_release(dev); mlx5_flex_item_port_cleanup(dev); -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT flow_hw_resource_release(dev); flow_hw_clear_port_info(dev); if (priv->sh->config.dv_flow_en == 2) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index aa328c3bc9..fc8e1190f3 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -34,7 +34,12 @@ #include "mlx5_os.h" #include "mlx5_autoconf.h" #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" +#ifndef RTE_EXEC_ENV_WINDOWS +#define HAVE_MLX5_HWS_SUPPORT 1 +#else +#define __be64 uint64_t +#endif +#include "hws/mlx5dr.h" #endif #define MLX5_SH(dev) (((struct mlx5_priv *)(dev)->data->dev_private)->sh) diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c index fe303a73bb..137e7dd4ac 100644 --- a/drivers/net/mlx5/mlx5_devx.c +++ b/drivers/net/mlx5/mlx5_devx.c @@ -907,7 +907,7 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq, rte_errno = errno; goto error; } -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef HAVE_MLX5_HWS_SUPPORT if (hrxq->hws_flags) { hrxq->action = mlx5dr_action_create_dest_tir (priv->dr_ctx, diff --git a/drivers/net/mlx5/mlx5_dr.c b/drivers/net/mlx5/mlx5_dr.c deleted file mode 100644 index 7218708986..0000000000 --- a/drivers/net/mlx5/mlx5_dr.c +++ /dev/null @@ -1,383 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2021 NVIDIA CORPORATION. All rights reserved. - */ -#include <rte_flow.h> - -#include "mlx5_defs.h" -#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) -#include "mlx5_dr.h" - -/* - * The following null stubs are prepared in order not to break the linkage - * before the HW steering low-level implementation is added. - */ - -/* Open a context used for direct rule insertion using hardware steering. - * Each context can contain multiple tables of different types. - * - * @param[in] ibv_ctx - * The ibv context to used for HWS. - * @param[in] attr - * Attributes used for context open. - * @return pointer to mlx5dr_context on success NULL otherwise. - */ -__rte_weak struct mlx5dr_context * -mlx5dr_context_open(void *ibv_ctx, - struct mlx5dr_context_attr *attr) -{ - (void)ibv_ctx; - (void)attr; - return NULL; -} - -/* Close a context used for direct hardware steering. - * - * @param[in] ctx - * mlx5dr context to close. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_context_close(struct mlx5dr_context *ctx) -{ - (void)ctx; - return 0; -} - -/* Create a new direct rule table. Each table can contain multiple matchers. - * - * @param[in] ctx - * The context in which the new table will be opened. - * @param[in] attr - * Attributes used for table creation. - * @return pointer to mlx5dr_table on success NULL otherwise. - */ -__rte_weak struct mlx5dr_table * -mlx5dr_table_create(struct mlx5dr_context *ctx, - struct mlx5dr_table_attr *attr) -{ - (void)ctx; - (void)attr; - return NULL; -} - -/* Destroy direct rule table. - * - * @param[in] tbl - * mlx5dr table to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int mlx5dr_table_destroy(struct mlx5dr_table *tbl) -{ - (void)tbl; - return 0; -} - -/* Create new match template based on items mask, the match template - * will be used for matcher creation. 
- * - * @param[in] items - * Describe the mask for template creation - * @param[in] flags - * Template creation flags - * @return pointer to mlx5dr_match_template on success NULL otherwise - */ -__rte_weak struct mlx5dr_match_template * -mlx5dr_match_template_create(const struct rte_flow_item items[], - enum mlx5dr_match_template_flags flags) -{ - (void)items; - (void)flags; - return NULL; -} - -/* Destroy match template. - * - * @param[in] mt - * Match template to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt) -{ - (void)mt; - return 0; -} - -/* Create a new direct rule matcher. Each matcher can contain multiple rules. - * Matchers on the table will be processed by priority. Matching fields and - * mask are described by the match template. In some cases multiple match - * templates can be used on the same matcher. - * - * @param[in] table - * The table in which the new matcher will be opened. - * @param[in] mt - * Array of match templates to be used on matcher. - * @param[in] num_of_mt - * Number of match templates in mt array. - * @param[in] attr - * Attributes used for matcher creation. - * @return pointer to mlx5dr_matcher on success NULL otherwise. - */ -__rte_weak struct mlx5dr_matcher * -mlx5dr_matcher_create(struct mlx5dr_table *table __rte_unused, - struct mlx5dr_match_template *mt[] __rte_unused, - uint8_t num_of_mt __rte_unused, - struct mlx5dr_matcher_attr *attr __rte_unused) -{ - return NULL; -} - -/* Destroy direct rule matcher. - * - * @param[in] matcher - * Matcher to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher __rte_unused) -{ - return 0; -} - -/* Enqueue create rule operation. - * - * @param[in] matcher - * The matcher in which the new rule will be created. - * @param[in] mt_idx - * Match template index to create the rule with. - * @param[in] items - * The items used for the value matching. - * @param[in] rule_actions - * Rule action to be executed on match. - * @param[in] num_of_actions - * Number of rule actions. - * @param[in] attr - * Rule creation attributes. - * @param[in, out] rule_handle - * A valid rule handle. The handle doesn't require any initialization. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_create(struct mlx5dr_matcher *matcher __rte_unused, - uint8_t mt_idx __rte_unused, - const struct rte_flow_item items[] __rte_unused, - struct mlx5dr_rule_action rule_actions[] __rte_unused, - uint8_t num_of_actions __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused, - struct mlx5dr_rule *rule_handle __rte_unused) -{ - return 0; -} - -/* Enqueue destroy rule operation. - * - * @param[in] rule - * The rule destruction to enqueue. - * @param[in] attr - * Rule destruction attributes. - * @return zero on successful enqueue non zero otherwise. - */ -__rte_weak int -mlx5dr_rule_destroy(struct mlx5dr_rule *rule __rte_unused, - struct mlx5dr_rule_attr *attr __rte_unused) -{ - return 0; -} - -/* Create direct rule drop action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. 
- */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_drop(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule default miss action. - * Defaults are RX: Drop TX: Wire. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_default_miss(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto table action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] tbl - * Destination table. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_table(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_table *tbl __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule goto TIR action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule TIR devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx __rte_unused, - struct mlx5dr_devx_obj *obj __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule TAG action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_tag(struct mlx5dr_context *ctx __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule counter action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] obj - * Direct rule counter devx object. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_counter(struct mlx5dr_context *ctx, - struct mlx5dr_devx_obj *obj, - uint32_t flags); - -/* Create direct rule reformat action. - * - * @param[in] ctx - * The context in which the new action will be created. - * @param[in] reformat_type - * Type of reformat. - * @param[in] data_sz - * Size in bytes of data. - * @param[in] inline_data - * Header data array in case of inline action. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_reformat(struct mlx5dr_context *ctx __rte_unused, - enum mlx5dr_action_reformat_type reformat_type __rte_unused, - size_t data_sz __rte_unused, - void *inline_data __rte_unused, - uint32_t log_bulk_size __rte_unused, - uint32_t flags __rte_unused) -{ - return NULL; -} - -/* Create direct rule modify header action. - * - * @param[in] ctx - * The context in which the new action will be created. 
- * @param[in] pattern_sz - * Byte size of the pattern array. - * @param[in] pattern - * PRM format modify pattern action array. - * @param[in] log_bulk_size - * Number of unique values used with this pattern. - * @param[in] flags - * Action creation flags. (enum mlx5dr_action_flags) - * @return pointer to mlx5dr_action on success NULL otherwise. - */ -__rte_weak struct mlx5dr_action * -mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx, - size_t pattern_sz, - rte_be64_t pattern[], - uint32_t log_bulk_size, - uint32_t flags); - -/* Destroy direct rule action. - * - * @param[in] action - * The action to destroy. - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_action_destroy(struct mlx5dr_action *action __rte_unused) -{ - return 0; -} - -/* Poll queue for rule creation and deletions completions. - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to poll. - * @param[in, out] res - * Completion array. - * @param[in] res_nb - * Maximum number of results to return. - * @return negative number on failure, the number of completions otherwise. - */ -__rte_weak int -mlx5dr_send_queue_poll(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - struct rte_flow_op_result res[] __rte_unused, - uint32_t res_nb __rte_unused) -{ - return 0; -} - -/* Perform an action on the queue - * - * @param[in] ctx - * The context to which the queue belong to. - * @param[in] queue_id - * The id of the queue to perform the action on. - * @param[in] actions - * Actions to perform on the queue. (enum mlx5dr_send_queue_actions) - * @return zero on success non zero otherwise. - */ -__rte_weak int -mlx5dr_send_queue_action(struct mlx5dr_context *ctx __rte_unused, - uint16_t queue_id __rte_unused, - uint32_t actions __rte_unused) -{ - return 0; -} - -#endif diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 1543d7f75e..1e32031443 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -93,6 +93,8 @@ const struct mlx5_flow_driver_ops *flow_drv_ops[] = { [MLX5_FLOW_TYPE_MIN] = &mlx5_flow_null_drv_ops, #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) [MLX5_FLOW_TYPE_DV] = &mlx5_flow_dv_drv_ops, +#endif +#ifdef HAVE_MLX5_HWS_SUPPORT [MLX5_FLOW_TYPE_HW] = &mlx5_flow_hw_drv_ops, #endif [MLX5_FLOW_TYPE_VERBS] = &mlx5_flow_verbs_drv_ops, diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h index 98ae7c6bda..a274808802 100644 --- a/drivers/net/mlx5/mlx5_flow.h +++ b/drivers/net/mlx5/mlx5_flow.h @@ -17,6 +17,7 @@ #include <mlx5_prm.h> #include "mlx5.h" +#include "hws/mlx5dr.h" /* E-Switch Manager port, used for rte_flow_item_port_id. */ #define MLX5_PORT_ESW_MGR UINT32_MAX @@ -1046,6 +1047,10 @@ struct rte_flow { #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H) +#ifdef PEDANTIC +#pragma GCC diagnostic ignored "-Wpedantic" +#endif + /* HWS flow struct. */ struct rte_flow_hw { uint32_t idx; /* Flow index from indexed pool. */ @@ -1056,9 +1061,13 @@ struct rte_flow_hw { struct mlx5_hrxq *hrxq; /* TIR action. */ }; struct rte_flow_template_table *table; /* The table flow allcated from. */ - struct mlx5dr_rule rule; /* HWS layer data struct. */ + uint8_t rule[0]; /* HWS layer data struct. */ } __rte_packed; +#ifdef PEDANTIC +#pragma GCC diagnostic error "-Wpedantic" +#endif + /* rte flow action translate to DR action struct. 
*/ struct mlx5_action_construct_data { LIST_ENTRY(mlx5_action_construct_data) next; diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c index 03725649c8..8de6757737 100644 --- a/drivers/net/mlx5/mlx5_flow_hw.c +++ b/drivers/net/mlx5/mlx5_flow_hw.c @@ -1110,8 +1110,8 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev, actions, rule_acts, &acts_num); ret = mlx5dr_rule_create(table->matcher, pattern_template_index, items, - rule_acts, acts_num, - &rule_attr, &flow->rule); + action_template_index, rule_acts, + &rule_attr, (struct mlx5dr_rule *)flow->rule); if (likely(!ret)) return (struct rte_flow *)flow; /* Flow created fail, return the descriptor and flow memory. */ @@ -1174,7 +1174,7 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev, job->user_data = user_data; job->flow = fh; rule_attr.user_data = job; - ret = mlx5dr_rule_destroy(&fh->rule, &rule_attr); + ret = mlx5dr_rule_destroy((struct mlx5dr_rule *)fh->rule, &rule_attr); if (likely(!ret)) return 0; priv->hw_q[queue].job_idx++; @@ -1440,7 +1440,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, .data = &flow_attr, }; struct mlx5_indexed_pool_config cfg = { - .size = sizeof(struct rte_flow_hw), + .size = sizeof(struct rte_flow_hw) + mlx5dr_rule_get_handle_size(), .trunk_size = 1 << 12, .per_core_cache = 1 << 13, .need_lock = 1, @@ -1501,7 +1501,7 @@ flow_hw_table_create(struct rte_eth_dev *dev, tbl->its[i] = item_templates[i]; } tbl->matcher = mlx5dr_matcher_create - (tbl->grp->tbl, mt, nb_item_templates, &matcher_attr); + (tbl->grp->tbl, mt, nb_item_templates, NULL, 0, &matcher_attr); if (!tbl->matcher) goto it_error; tbl->nb_item_templates = nb_item_templates; -- 2.18.1 ^ permalink raw reply [flat|nested] 134+ messages in thread
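The mlx5dr.h changes in this patch redefine the basic insertion flow: a matcher is now created from both match templates and action templates, mlx5dr_rule_create() selects an action template by index instead of taking an explicit action count, and the rule handle becomes opaque, sized through mlx5dr_rule_get_handle_size(). The following minimal C sketch only puts the prototypes visible in the diff together for a single drop rule on the NIC RX domain; it is an illustration, not code from this series. The ibv_context is assumed to be already opened, error handling and teardown are omitted, and the attribute values plus the matcher sizing fields (MLX5DR_MATCHER_RESOURCE_MODE_RULE with rule.num_log) and the table attribute type field are assumptions based on the driver sources rather than on the hunks shown above.

#include <stdlib.h>
#include <rte_flow.h>
#include "mlx5dr.h"

/* Sketch: enqueue one drop rule through an HWS queue and wait for its completion. */
static int hws_insert_drop_rule(struct ibv_context *ibv_ctx)
{
	struct mlx5dr_context_attr ctx_attr = { .queues = 1, .queue_size = 256 };
	struct mlx5dr_table_attr tbl_attr = { .type = MLX5DR_TABLE_TYPE_NIC_RX, .level = 1 };
	struct mlx5dr_matcher_attr m_attr = {
		.priority = 0,
		.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE, /* size by expected rule count */
		.rule.num_log = 12,
	};
	struct mlx5dr_rule_attr rule_attr = { .queue_id = 0, .burst = 0 };
	/* Template mask and rule value items: match any Ethernet packet. */
	struct rte_flow_item items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	/* Action template terminated by MLX5DR_ACTION_TYP_LAST, as documented in the header. */
	enum mlx5dr_action_type at_types[] = { MLX5DR_ACTION_TYP_DROP, MLX5DR_ACTION_TYP_LAST };
	struct mlx5dr_context *ctx = mlx5dr_context_open(ibv_ctx, &ctx_attr);
	struct mlx5dr_table *tbl = mlx5dr_table_create(ctx, &tbl_attr);
	struct mlx5dr_match_template *mt = mlx5dr_match_template_create(items, 0);
	struct mlx5dr_action_template *at = mlx5dr_action_template_create(at_types);
	/* Matcher creation now takes the action template array as well. */
	struct mlx5dr_matcher *matcher = mlx5dr_matcher_create(tbl, &mt, 1, &at, 1, &m_attr);
	struct mlx5dr_action *drop = mlx5dr_action_create_dest_drop(ctx, MLX5DR_ACTION_FLAG_HWS_RX);
	struct mlx5dr_rule_action rule_actions[] = { { .action = drop } };
	/* The rule handle is opaque to the caller; only its size is exported. */
	struct mlx5dr_rule *rule = calloc(1, mlx5dr_rule_get_handle_size());
	struct rte_flow_op_result res[1];
	int ret;

	/* mt_idx 0 and at_idx 0 select the templates registered on the matcher above. */
	ret = mlx5dr_rule_create(matcher, 0, items, 0, rule_actions, &rule_attr, rule);
	if (ret)
		return ret;
	/* burst is 0, so the work was posted immediately; poll the queue for the completion. */
	do {
		ret = mlx5dr_send_queue_poll(ctx, 0, res, 1);
	} while (ret == 0);
	return ret < 0 ? ret : 0;
}

The same opaque-handle choice shows up on the PMD side in the mlx5_flow.h and mlx5_flow_hw.c hunks above: rte_flow_hw no longer embeds struct mlx5dr_rule but reserves mlx5dr_rule_get_handle_size() extra bytes behind the zero-length rule[] member, keeping the HWS rule layout private to mlx5dr.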
* RE: [v6 18/18] net/mlx5/hws: Enable HWS 2022-10-20 15:57 ` [v6 18/18] net/mlx5/hws: Enable HWS Alex Vesker @ 2022-10-24 6:54 ` Slava Ovsiienko 0 siblings, 0 replies; 134+ messages in thread From: Slava Ovsiienko @ 2022-10-24 6:54 UTC (permalink / raw) To: Alex Vesker, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou, Matan Azrad Cc: dev, Ori Kam > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 18:58 > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com>; Matan Azrad > <matan@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 18/18] net/mlx5/hws: Enable HWS > > Replace stub implementation of HWS with mlx5dr code. > > Signed-off-by: Alex Vesker <valex@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> ^ permalink raw reply [flat|nested] 134+ messages in thread
* RE: [v6 00/18] net/mlx5: Add HW steering low level support 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker ` (17 preceding siblings ...) 2022-10-20 15:57 ` [v6 18/18] net/mlx5/hws: Enable HWS Alex Vesker @ 2022-10-24 10:56 ` Raslan Darawsheh 18 siblings, 0 replies; 134+ messages in thread From: Raslan Darawsheh @ 2022-10-24 10:56 UTC (permalink / raw) To: Alex Vesker, Alex Vesker, Slava Ovsiienko, NBU-Contact-Thomas Monjalon (EXTERNAL), Suanming Mou Cc: dev, Ori Kam Hi, > -----Original Message----- > From: Alex Vesker <valex@nvidia.com> > Sent: Thursday, October 20, 2022 6:58 PM > To: Alex Vesker <valex@nvidia.com>; Slava Ovsiienko > <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) > <thomas@monjalon.net>; Suanming Mou <suanmingm@nvidia.com> > Cc: dev@dpdk.org; Ori Kam <orika@nvidia.com> > Subject: [v6 00/18] net/mlx5: Add HW steering low level support > > Mellanox ConnetX devices supports packet matching, packet modification > and redirection. These functionalities are also referred to as flow-steering. > To configure a steering rule, the rule is written to the device owned memory, > this memory is accessed and cached by the device when processing a packet. > > The highlight of this patchset is supporting HW Steering (HWS) which is the > new technology supported in new ConnectX devices, HWS allows configuring > steering rules directly to the HW using special HW queues with minimal CPU > effort. > > This patchset is the internal low layer implementation for HWS used by the > mlx5 PMD. The mlx5dr (direct rule) is layer that bridges between the PMD > and the HW by configuring the HW offloads based on the PMD logic > > v2: > Fix check patch and cosmetic changes > > v3: > -Fix unsupported items > -Fix compilation with mlx5dv dependency > > v4: > -Fix compile on Windows > > v5: > -Fix compile on old rdma-core or no rdma core > > v6: > -Fix meson style and improve configure > -Checkpatch and compilation fixes > -Fix action number issue > > Alex Vesker (8): > net/mlx5: Add additional glue functions for HWS > net/mlx5/hws: Add HWS send layer > net/mlx5/hws: Add HWS definer layer > net/mlx5/hws: Add HWS context object > net/mlx5/hws: Add HWS table object > net/mlx5/hws: Add HWS matcher object > net/mlx5/hws: Add HWS rule object > net/mlx5/hws: Enable HWS > > Bing Zhao (2): > common/mlx5: query set capability of registers > net/mlx5: provide the available tag registers > > Dariusz Sosnowski (1): > net/mlx5: add port to metadata conversion > > Erez Shitrit (3): > net/mlx5/hws: Add HWS command layer > net/mlx5/hws: Add HWS pool and buddy > net/mlx5/hws: Add HWS action object > > Hamdan Igbaria (1): > net/mlx5/hws: Add HWS debug layer > > Suanming Mou (3): > net/mlx5: split flow item translation > net/mlx5: split flow item matcher and value translation > net/mlx5: add hardware steering item translation function > > doc/guides/nics/features/default.ini | 1 + > doc/guides/nics/features/mlx5.ini | 1 + > doc/guides/nics/mlx5.rst | 5 +- > doc/guides/rel_notes/release_22_11.rst | 4 + > drivers/common/mlx5/linux/meson.build | 11 +- > drivers/common/mlx5/linux/mlx5_glue.c | 121 +- > drivers/common/mlx5/linux/mlx5_glue.h | 17 + > drivers/common/mlx5/mlx5_devx_cmds.c | 30 + > drivers/common/mlx5/mlx5_devx_cmds.h | 2 + > drivers/common/mlx5/mlx5_prm.h | 652 ++++- > drivers/net/mlx5/hws/meson.build | 18 + > drivers/net/mlx5/{mlx5_dr.h => hws/mlx5dr.h} | 201 +- > drivers/net/mlx5/hws/mlx5dr_action.c | 2237 +++++++++++++++ > drivers/net/mlx5/hws/mlx5dr_action.h 
| 253 ++ > drivers/net/mlx5/hws/mlx5dr_buddy.c | 200 ++ > drivers/net/mlx5/hws/mlx5dr_buddy.h | 22 + > drivers/net/mlx5/hws/mlx5dr_cmd.c | 948 +++++++ > drivers/net/mlx5/hws/mlx5dr_cmd.h | 230 ++ > drivers/net/mlx5/hws/mlx5dr_context.c | 223 ++ > drivers/net/mlx5/hws/mlx5dr_context.h | 40 + > drivers/net/mlx5/hws/mlx5dr_debug.c | 462 ++++ > drivers/net/mlx5/hws/mlx5dr_debug.h | 28 + > drivers/net/mlx5/hws/mlx5dr_definer.c | 1968 ++++++++++++++ > drivers/net/mlx5/hws/mlx5dr_definer.h | 585 ++++ > drivers/net/mlx5/hws/mlx5dr_internal.h | 93 + > drivers/net/mlx5/hws/mlx5dr_matcher.c | 919 +++++++ > drivers/net/mlx5/hws/mlx5dr_matcher.h | 76 + > drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 511 ++++ > drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 83 + > drivers/net/mlx5/hws/mlx5dr_pool.c | 672 +++++ > drivers/net/mlx5/hws/mlx5dr_pool.h | 152 ++ > drivers/net/mlx5/hws/mlx5dr_rule.c | 528 ++++ > drivers/net/mlx5/hws/mlx5dr_rule.h | 50 + > drivers/net/mlx5/hws/mlx5dr_send.c | 844 ++++++ > drivers/net/mlx5/hws/mlx5dr_send.h | 275 ++ > drivers/net/mlx5/hws/mlx5dr_table.c | 248 ++ > drivers/net/mlx5/hws/mlx5dr_table.h | 44 + > drivers/net/mlx5/linux/mlx5_os.c | 12 +- > drivers/net/mlx5/meson.build | 7 +- > drivers/net/mlx5/mlx5.c | 9 +- > drivers/net/mlx5/mlx5.h | 8 +- > drivers/net/mlx5/mlx5_defs.h | 2 + > drivers/net/mlx5/mlx5_devx.c | 2 +- > drivers/net/mlx5/mlx5_dr.c | 383 --- > drivers/net/mlx5/mlx5_flow.c | 29 +- > drivers/net/mlx5/mlx5_flow.h | 174 +- > drivers/net/mlx5/mlx5_flow_dv.c | 2567 +++++++++--------- > drivers/net/mlx5/mlx5_flow_hw.c | 115 +- > 48 files changed, 14368 insertions(+), 1694 deletions(-) create mode 100644 > drivers/net/mlx5/hws/meson.build rename drivers/net/mlx5/{mlx5_dr.h => > hws/mlx5dr.h} (67%) create mode 100644 > drivers/net/mlx5/hws/mlx5dr_action.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_action.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_buddy.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_cmd.c create mode > 100644 drivers/net/mlx5/hws/mlx5dr_cmd.h create mode 100644 > drivers/net/mlx5/hws/mlx5dr_context.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_context.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_debug.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_definer.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_internal.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_matcher.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_pat_arg.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_pool.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_rule.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_send.h > create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.c > create mode 100644 drivers/net/mlx5/hws/mlx5dr_table.h > delete mode 100644 drivers/net/mlx5/mlx5_dr.c > > -- > 2.18.1 Series applied to next-net-mlx with small modifications to the commit logs, Kindest regards, Raslan Darawsheh ^ permalink raw reply [flat|nested] 134+ messages in thread
end of thread, other threads:[~2022-10-24 10:56 UTC | newest] Thread overview: 134+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2022-09-22 19:03 [v1 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-09-22 19:03 ` [v1 01/19] net/mlx5: split flow item translation Alex Vesker 2022-09-22 19:03 ` [v1 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-09-22 19:03 ` [v1 03/19] net/mlx5: add hardware steering item translation function Alex Vesker 2022-09-22 19:03 ` [v1 04/19] net/mlx5: add port to metadata conversion Alex Vesker 2022-09-22 19:03 ` [v1 05/19] common/mlx5: query set capability of registers Alex Vesker 2022-09-22 19:03 ` [v1 06/19] net/mlx5: provide the available tag registers Alex Vesker 2022-09-22 19:03 ` [v1 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-09-22 19:03 ` [v1 08/19] net/mlx5: Remove stub HWS support Alex Vesker 2022-09-22 19:03 ` [v1 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-09-22 19:03 ` [v1 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-09-22 19:03 ` [v1 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-09-22 19:03 ` [v1 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-09-22 19:03 ` [v1 13/19] net/mlx5/hws: Add HWS context object Alex Vesker 2022-09-22 19:03 ` [v1 14/19] net/mlx5/hws: Add HWS table object Alex Vesker 2022-09-22 19:03 ` [v1 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-09-22 19:03 ` [v1 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-09-22 19:03 ` [v1 17/19] net/mlx5/hws: Add HWS action object Alex Vesker 2022-09-22 19:03 ` [v1 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-09-22 19:03 ` [v1 19/19] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-06 15:03 ` [v2 00/19] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-06 15:03 ` [v2 01/19] net/mlx5: split flow item translation Alex Vesker 2022-10-06 15:03 ` [v2 02/19] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-10-06 15:03 ` [v2 03/19] net/mlx5: add hardware steering item translation function Alex Vesker 2022-10-06 15:03 ` [v2 04/19] net/mlx5: add port to metadata conversion Alex Vesker 2022-10-06 15:03 ` [v2 05/19] common/mlx5: query set capability of registers Alex Vesker 2022-10-06 15:03 ` [v2 06/19] net/mlx5: provide the available tag registers Alex Vesker 2022-10-06 15:03 ` [v2 07/19] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-10-06 15:03 ` [v2 08/19] net/mlx5: Remove stub HWS support Alex Vesker 2022-10-06 15:03 ` [v2 09/19] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-10-06 15:03 ` [v2 10/19] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-10-06 15:03 ` [v2 11/19] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-10-06 15:03 ` [v2 12/19] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-10-06 15:03 ` [v2 13/19] net/mlx5/hws: Add HWS context object Alex Vesker 2022-10-06 15:03 ` [v2 14/19] net/mlx5/hws: Add HWS table object Alex Vesker 2022-10-06 15:03 ` [v2 15/19] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-10-06 15:03 ` [v2 16/19] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-10-06 15:03 ` [v2 17/19] net/mlx5/hws: Add HWS action object Alex Vesker 2022-10-06 15:03 ` [v2 18/19] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-06 15:03 ` [v2 19/19] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-14 11:48 ` [v3 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-14 11:48 ` 
[v3 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-14 11:48 ` [v3 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-10-14 11:48 ` [v3 03/18] net/mlx5: add hardware steering item translation function Alex Vesker 2022-10-14 11:48 ` [v3 04/18] net/mlx5: add port to metadata conversion Alex Vesker 2022-10-14 11:48 ` [v3 05/18] common/mlx5: query set capability of registers Alex Vesker 2022-10-14 11:48 ` [v3 06/18] net/mlx5: provide the available tag registers Alex Vesker 2022-10-14 11:48 ` [v3 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-10-14 11:48 ` [v3 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-10-14 11:48 ` [v3 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-10-14 11:48 ` [v3 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-10-14 11:48 ` [v3 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-10-14 11:48 ` [v3 12/18] net/mlx5/hws: Add HWS context object Alex Vesker 2022-10-14 11:48 ` [v3 13/18] net/mlx5/hws: Add HWS table object Alex Vesker 2022-10-14 11:48 ` [v3 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-10-14 11:48 ` [v3 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-10-14 11:48 ` [v3 16/18] net/mlx5/hws: Add HWS action object Alex Vesker 2022-10-14 11:48 ` [v3 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-14 11:48 ` [v3 18/18] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-19 14:42 ` [v4 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 14:42 ` [v4 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-19 14:42 ` [v4 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-10-19 14:42 ` [v4 03/18] net/mlx5: add hardware steering item translation function Alex Vesker 2022-10-19 14:42 ` [v4 04/18] net/mlx5: add port to metadata conversion Alex Vesker 2022-10-19 14:42 ` [v4 05/18] common/mlx5: query set capability of registers Alex Vesker 2022-10-19 14:42 ` [v4 06/18] net/mlx5: provide the available tag registers Alex Vesker 2022-10-19 14:42 ` [v4 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-10-19 14:42 ` [v4 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-10-19 14:42 ` [v4 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-10-19 14:42 ` [v4 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-10-19 14:42 ` [v4 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-10-19 14:42 ` [v4 12/18] net/mlx5/hws: Add HWS context object Alex Vesker 2022-10-19 14:42 ` [v4 13/18] net/mlx5/hws: Add HWS table object Alex Vesker 2022-10-19 14:42 ` [v4 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-10-19 14:42 ` [v4 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-10-19 14:42 ` [v4 16/18] net/mlx5/hws: Add HWS action object Alex Vesker 2022-10-19 14:42 ` [v4 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-19 14:42 ` [v4 18/18] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-19 20:57 ` [v5 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-19 20:57 ` [v5 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-19 20:57 ` [v5 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-10-19 20:57 ` [v5 03/18] net/mlx5: add hardware steering item translation function Alex Vesker 2022-10-19 20:57 ` [v5 04/18] net/mlx5: add port to metadata conversion Alex Vesker 2022-10-19 20:57 ` [v5 05/18] common/mlx5: query set capability of registers Alex Vesker 
2022-10-19 20:57 ` [v5 06/18] net/mlx5: provide the available tag registers Alex Vesker 2022-10-19 20:57 ` [v5 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-10-19 20:57 ` [v5 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-10-19 20:57 ` [v5 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-10-19 20:57 ` [v5 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-10-19 20:57 ` [v5 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-10-19 20:57 ` [v5 12/18] net/mlx5/hws: Add HWS context object Alex Vesker 2022-10-19 20:57 ` [v5 13/18] net/mlx5/hws: Add HWS table object Alex Vesker 2022-10-19 20:57 ` [v5 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-10-19 20:57 ` [v5 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-10-19 20:57 ` [v5 16/18] net/mlx5/hws: Add HWS action object Alex Vesker 2022-10-19 20:57 ` [v5 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-19 20:57 ` [v5 18/18] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-20 15:57 ` [v6 00/18] net/mlx5: Add HW steering low level support Alex Vesker 2022-10-20 15:57 ` [v6 01/18] net/mlx5: split flow item translation Alex Vesker 2022-10-24 6:47 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 02/18] net/mlx5: split flow item matcher and value translation Alex Vesker 2022-10-24 6:49 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 03/18] net/mlx5: add hardware steering item translation function Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 04/18] net/mlx5: add port to metadata conversion Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 05/18] common/mlx5: query set capability of registers Alex Vesker 2022-10-24 6:50 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 06/18] net/mlx5: provide the available tag registers Alex Vesker 2022-10-24 6:51 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 07/18] net/mlx5: Add additional glue functions for HWS Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 08/18] net/mlx5/hws: Add HWS command layer Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 09/18] net/mlx5/hws: Add HWS pool and buddy Alex Vesker 2022-10-24 6:52 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 10/18] net/mlx5/hws: Add HWS send layer Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 11/18] net/mlx5/hws: Add HWS definer layer Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 12/18] net/mlx5/hws: Add HWS context object Alex Vesker 2022-10-24 6:53 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 13/18] net/mlx5/hws: Add HWS table object Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 14/18] net/mlx5/hws: Add HWS matcher object Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 15/18] net/mlx5/hws: Add HWS rule object Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 16/18] net/mlx5/hws: Add HWS action object Alex Vesker 2022-10-20 15:57 ` [v6 17/18] net/mlx5/hws: Add HWS debug layer Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-20 15:57 ` [v6 18/18] net/mlx5/hws: Enable HWS Alex Vesker 2022-10-24 6:54 ` Slava Ovsiienko 2022-10-24 10:56 ` [v6 00/18] net/mlx5: Add HW steering low level support Raslan Darawsheh