From: Rongwei Liu <rongweil@nvidia.com>
To: <dev@dpdk.org>, <matan@nvidia.com>, <viacheslavo@nvidia.com>,
<orika@nvidia.com>, <suanmingm@nvidia.com>, <thomas@monjalon.net>
Subject: [PATCH v3 5/6] net/mlx5: implement IPv6 routing push remove
Date: Tue, 31 Oct 2023 12:51:30 +0200
Message-ID: <20231031105131.441078-6-rongweil@nvidia.com>
In-Reply-To: <20231031105131.441078-1-rongweil@nvidia.com>
Reserve a push data buffer for each job; the maximum
length is set to 128 bytes for now.
Only the IPPROTO_ROUTING type is supported when translating the rte
flow action.
Remove actions must be shared globally and support only TCP or UDP
as the next layer.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
doc/guides/nics/features/mlx5.ini | 2 +
doc/guides/nics/mlx5.rst | 11 +-
doc/guides/rel_notes/release_23_11.rst | 2 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.h | 21 +-
drivers/net/mlx5/mlx5_flow_hw.c | 282 ++++++++++++++++++++++++-
6 files changed, 309 insertions(+), 10 deletions(-)
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 0ed9a6aefc..0739fe9d63 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -108,6 +108,8 @@ flag = Y
inc_tcp_ack = Y
inc_tcp_seq = Y
indirect_list = Y
+ipv6_ext_push = Y
+ipv6_ext_remove = Y
jump = Y
mark = Y
meter = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index be5054e68a..955dedf3db 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
- Matching on GTP extension header with raw encap/decap action.
- Matching on Geneve TLV option header with raw encap/decap action.
- Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
- Matching on InfiniBand BTH.
+- Modify flex item field.
- Modify IPv4/IPv6 ECN field.
- RSS support in sample action.
- E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
- Sub-Function.
- Matching on represented port.
- Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.
Limitations
-----------
@@ -759,6 +761,13 @@ Limitations
to the representor of the source virtual port (SF/VF), while if it is disabled, the
traffic will be routed based on the steering rules in the ingress domain.
+- IPv6 routing extension push or remove:
+
+ - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+ - Supported in non-zero groups (no limit on the transfer domain if ``fdb_def_rule_en`` = 1, which is the default).
+ - Supports only TCP or UDP as the next layer.
+ - IPv6 routing header must be the only present extension.
+ - Not supported on guest ports.
Statistics
----------
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 93999893bd..5ef309ea59 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -157,6 +157,8 @@ New Features
* Added support for ``RTE_FLOW_ACTION_TYPE_INDIRECT_LIST`` flow action.
* Added support for ``RTE_FLOW_ITEM_TYPE_PTYPE`` flow item.
* Added support for ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR`` flow action and mirror.
+ * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH`` flow action.
+ * Added support for ``RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE`` flow action.
* **Updated Solarflare net driver.**
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f13a56ee9e..277bbbf407 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
};
void *user_data; /* Job user data. */
uint8_t *encap_data; /* Encap data. */
+ uint8_t *push_data; /* IPv6 routing push data. */
struct mlx5_modification_cmd *mhdr_cmd;
struct rte_flow_item *items;
union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43608e15d2..c7be1f3553 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -363,6 +363,8 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
#define MLX5_FLOW_ACTION_PORT_REPRESENTOR (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 48)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 49)
#define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1269,6 +1271,8 @@ typedef int
const struct rte_flow_action *,
struct mlx5dr_rule_action *);
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
/* rte flow action translate to DR action struct. */
struct mlx5_action_construct_data {
LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1315,6 +1319,10 @@ struct mlx5_action_construct_data {
struct {
cnt_id_t id;
} shared_counter;
+ struct {
+ /* IPv6 extension push data len. */
+ uint16_t len;
+ } ipv6_ext;
struct {
uint32_t id;
uint32_t conf_masked:1;
@@ -1359,6 +1367,7 @@ struct rte_flow_actions_template {
uint16_t *src_off; /* RTE action displacement from app. template */
uint16_t reformat_off; /* Offset of DR reformat action. */
uint16_t mhdr_off; /* Offset of DR modify header action. */
+ uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */
uint32_t refcnt; /* Reference counter. */
uint8_t flex_item; /* flex item index. */
};
@@ -1384,7 +1393,14 @@ struct mlx5_hw_encap_decap_action {
uint8_t data[]; /* Action data. */
};
-#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+/* Push remove action struct. */
+struct mlx5_hw_push_remove_action {
+ struct mlx5dr_action *action; /* Action object. */
+ /* Is push_remove action shared across flows in table. */
+ uint8_t shared;
+ size_t data_size; /* Action metadata size. */
+ uint8_t data[]; /* Action data. */
+};
/* Modify field action struct. */
struct mlx5_hw_modify_header_action {
@@ -1415,6 +1431,9 @@ struct mlx5_hw_actions {
/* Encap/Decap action. */
struct mlx5_hw_encap_decap_action *encap_decap;
uint16_t encap_decap_pos; /* Encap/Decap action position. */
+ /* Push/remove action. */
+ struct mlx5_hw_push_remove_action *push_remove;
+ uint16_t push_remove_pos; /* Push/remove action position. */
uint32_t mark:1; /* Indicate the mark action. */
cnt_id_t cnt_id; /* Counter id. */
uint32_t mtr_id; /* Meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 977751394e..592d436099 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
mlx5_free(acts->encap_decap);
acts->encap_decap = NULL;
}
+ if (acts->push_remove) {
+ if (acts->push_remove->action)
+ mlx5dr_action_destroy(acts->push_remove->action);
+ mlx5_free(acts->push_remove);
+ acts->push_remove = NULL;
+ }
if (acts->mhdr) {
flow_hw_template_destroy_mhdr_action(acts->mhdr);
mlx5_free(acts->mhdr);
@@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
return 0;
}
+/**
+ * Append dynamic push action to the dynamic action list.
+ *
+ * @param[in] dev
+ * Pointer to the port.
+ * @param[in] acts
+ * Pointer to the template HW steering DR actions.
+ * @param[in] type
+ * Action type.
+ * @param[in] action_src
+ * Offset of source rte flow action.
+ * @param[in] action_dst
+ * Offset of destination DR action.
+ * @param[in] len
+ * Length of the data to be updated.
+ *
+ * @return
+ * Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+ struct mlx5_hw_actions *acts,
+ enum rte_flow_action_type type,
+ uint16_t action_src,
+ uint16_t action_dst,
+ uint16_t len)
+{
+ struct mlx5_action_construct_data *act_data;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+ if (!act_data)
+ return NULL;
+ act_data->ipv6_ext.len = len;
+ LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+ return act_data;
+}
+
static __rte_always_inline int
__flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
struct mlx5_hw_actions *acts,
@@ -1924,6 +1968,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+ const struct mlx5_flow_template_table_cfg *cfg,
+ struct mlx5_hw_actions *acts,
+ struct rte_flow_actions_template *at,
+ uint8_t *push_data, uint8_t *push_data_m,
+ size_t push_size, uint16_t recom_src,
+ enum mlx5dr_action_type recom_type)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+ const struct rte_flow_attr *attr = &table_attr->flow_attr;
+ enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+ struct mlx5_action_construct_data *act_data;
+ struct mlx5dr_action_reformat_header hdr = {0};
+ uint32_t flag, bulk = 0;
+
+ flag = mlx5_hw_act_flag[!!attr->group][type];
+ acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(*acts->push_remove) + push_size,
+ 0, SOCKET_ID_ANY);
+ if (!acts->push_remove)
+ return -ENOMEM;
+
+ switch (recom_type) {
+ case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ if (!push_data || !push_size)
+ goto err1;
+ if (!push_data_m) {
+ bulk = rte_log2_u32(table_attr->nb_flows);
+ } else {
+ flag |= MLX5DR_ACTION_FLAG_SHARED;
+ acts->push_remove->shared = 1;
+ }
+ acts->push_remove->data_size = push_size;
+ memcpy(acts->push_remove->data, push_data, push_size);
+ hdr.data = push_data;
+ hdr.sz = push_size;
+ break;
+ case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+ flag |= MLX5DR_ACTION_FLAG_SHARED;
+ acts->push_remove->shared = 1;
+ break;
+ default:
+ break;
+ }
+
+ acts->push_remove->action =
+ mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+ recom_type, &hdr, bulk, flag);
+ if (!acts->push_remove->action)
+ goto err1;
+ acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+ acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+ acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+ acts->push_remove_pos = at->recom_off;
+ if (!acts->push_remove->shared) {
+ act_data = __flow_hw_act_data_push_append(dev, acts,
+ RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+ recom_src, at->recom_off, push_size);
+ if (!act_data)
+ goto err;
+ }
+ return 0;
+err:
+ if (acts->push_remove->action)
+ mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+ if (acts->push_remove) {
+ mlx5_free(acts->push_remove);
+ acts->push_remove = NULL;
+ }
+ return -EINVAL;
+}
+
/**
* Translate rte_flow actions to DR action.
*
@@ -1957,19 +2077,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
{
struct mlx5_priv *priv = dev->data->dev_private;
const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+ struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex;
const struct rte_flow_attr *attr = &table_attr->flow_attr;
struct rte_flow_action *actions = at->actions;
struct rte_flow_action *masks = at->masks;
enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
+ enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
const struct rte_flow_action_raw_encap *raw_encap_data;
+ const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data;
const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
- uint16_t reformat_src = 0;
+ uint16_t reformat_src = 0, recom_src = 0;
uint8_t *encap_data = NULL, *encap_data_m = NULL;
- size_t data_size = 0;
+ uint8_t *push_data = NULL, *push_data_m = NULL;
+ size_t data_size = 0, push_size = 0;
struct mlx5_hw_modify_header_action mhdr = { 0 };
bool actions_end = false;
uint32_t type;
bool reformat_used = false;
+ bool recom_used = false;
unsigned int of_vlan_offset;
uint16_t jump_pos;
uint32_t ct_idx;
@@ -2175,6 +2300,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
reformat_used = true;
refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+ !priv->sh->srh_flex_parser.flex.mapnum) {
+ DRV_LOG(ERR, "SRv6 anchor is not supported.");
+ goto err;
+ }
+ MLX5_ASSERT(!recom_used && !recom_type);
+ recom_used = true;
+ recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+ ipv6_ext_data =
+ (const struct rte_flow_action_ipv6_ext_push *)masks->conf;
+ if (ipv6_ext_data)
+ push_data_m = ipv6_ext_data->data;
+ ipv6_ext_data =
+ (const struct rte_flow_action_ipv6_ext_push *)actions->conf;
+ if (ipv6_ext_data) {
+ push_data = ipv6_ext_data->data;
+ push_size = ipv6_ext_data->size;
+ }
+ recom_src = src_pos;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+ !priv->sh->srh_flex_parser.flex.mapnum) {
+ DRV_LOG(ERR, "SRv6 anchor is not supported.");
+ goto err;
+ }
+ recom_used = true;
+ recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+ break;
case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
flow_hw_translate_group(dev, cfg, attr->group,
&target_grp, error);
@@ -2322,6 +2477,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
if (ret)
goto err;
}
+ if (recom_used) {
+ MLX5_ASSERT(at->recom_off != UINT16_MAX);
+ ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data,
+ push_data_m, push_size, recom_src,
+ recom_type);
+ if (ret)
+ goto err;
+ }
return 0;
err:
err = rte_errno;
@@ -2719,11 +2882,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
const struct rte_flow_action *action;
const struct rte_flow_action_raw_encap *raw_encap_data;
+ const struct rte_flow_action_ipv6_ext_push *ipv6_push;
const struct rte_flow_item *enc_item = NULL;
const struct rte_flow_action_ethdev *port_action = NULL;
const struct rte_flow_action_meter *meter = NULL;
const struct rte_flow_action_age *age = NULL;
uint8_t *buf = job->encap_data;
+ uint8_t *push_buf = job->push_data;
struct rte_flow_attr attr = {
.ingress = 1,
};
@@ -2854,6 +3019,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
MLX5_ASSERT(raw_encap_data->size ==
act_data->encap.len);
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ ipv6_push =
+ (const struct rte_flow_action_ipv6_ext_push *)action->conf;
+ rte_memcpy((void *)push_buf, ipv6_push->data,
+ act_data->ipv6_ext.len);
+ MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
+ break;
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
ret = flow_hw_set_vlan_vid_construct(dev, job,
@@ -3010,6 +3182,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
job->flow->res_idx - 1;
rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
}
+ if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
+ rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
+ job->flow->res_idx - 1;
+ rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+ }
if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
job->flow->cnt_id = hw_acts->cnt_id;
return 0;
@@ -5113,6 +5290,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate ipv6_ext_push action.
+ *
+ * @param[in] dev
+ * Pointer to rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the indirect action.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf;
+
+ if (!raw_push_data || !raw_push_data->size || !raw_push_data->data)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "invalid ipv6_ext_push data");
+ if (raw_push_data->type != IPPROTO_ROUTING ||
+ raw_push_data->size > MLX5_PUSH_MAX_LEN)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "Unsupported ipv6_ext_push type or length");
+ return 0;
+}
+
/**
* Validate raw_encap action.
*
@@ -5340,6 +5549,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
#endif
uint16_t i;
int ret;
+ const struct rte_flow_action_ipv6_ext_remove *remove_data;
/* FDB actions are only valid to proxy port. */
if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
@@ -5436,6 +5646,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
/* TODO: Validation logic */
action_flags |= MLX5_FLOW_ACTION_DECAP;
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
+ if (ret < 0)
+ return ret;
+ action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ remove_data = action->conf;
+ /* Remove action must be shared. */
+ if (remove_data->type != IPPROTO_ROUTING || !mask) {
+ DRV_LOG(ERR, "Only supports shared IPv6 routing remove");
+ return -EINVAL;
+ }
+ action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE;
+ break;
case RTE_FLOW_ACTION_TYPE_METER:
/* TODO: Validation logic */
action_flags |= MLX5_FLOW_ACTION_METER;
@@ -5551,6 +5776,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN,
[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN,
[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
+ [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+ [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
};
static inline void
@@ -5648,6 +5875,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
/**
* Create DR action template based on a provided sequence of flow actions.
*
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
* @param[in] at
* Pointer to flow actions template to be updated.
*
@@ -5656,7 +5885,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
* NULL otherwise.
*/
static struct mlx5dr_action_template *
-flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
+flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
+ struct rte_flow_actions_template *at)
{
struct mlx5dr_action_template *dr_template;
enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
@@ -5665,8 +5895,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
uint16_t reformat_off = UINT16_MAX;
uint16_t mhdr_off = UINT16_MAX;
+ uint16_t recom_off = UINT16_MAX;
uint16_t cnt_off = UINT16_MAX;
+ enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
int ret;
+
for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
const struct rte_flow_action_raw_encap *raw_encap_data;
size_t data_size;
@@ -5698,6 +5931,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
reformat_off = curr_off++;
reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ MLX5_ASSERT(recom_off == UINT16_MAX);
+ recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+ recom_off = curr_off++;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ MLX5_ASSERT(recom_off == UINT16_MAX);
+ recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+ recom_off = curr_off++;
+ break;
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
raw_encap_data = at->actions[i].conf;
data_size = raw_encap_data->size;
@@ -5770,11 +6013,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
at->reformat_off = reformat_off;
action_types[reformat_off] = reformat_act_type;
}
+ if (recom_off != UINT16_MAX) {
+ at->recom_off = recom_off;
+ action_types[recom_off] = recom_type;
+ }
dr_template = mlx5dr_action_template_create(action_types);
- if (dr_template)
+ if (dr_template) {
at->dr_actions_num = curr_off;
- else
+ } else {
DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
+ return NULL;
+ }
+ /* Create srh flex parser for remove anchor. */
+ if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT ||
+ recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) &&
+ mlx5_alloc_srh_flex_parser(dev)) {
+ DRV_LOG(ERR, "Failed to create srv6 flex parser");
+ claim_zero(mlx5dr_action_template_destroy(dr_template));
+ return NULL;
+ }
return dr_template;
err_actions_num:
DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
@@ -6183,7 +6440,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
break;
}
}
- at->tmpl = flow_hw_dr_actions_template_create(at);
+ at->tmpl = flow_hw_dr_actions_template_create(dev, at);
if (!at->tmpl)
goto error;
at->action_flags = action_flags;
@@ -6220,6 +6477,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
struct rte_flow_actions_template *template,
struct rte_flow_error *error __rte_unused)
{
+ uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE |
+ MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+
if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
DRV_LOG(WARNING, "Action template %p is still in use.",
(void *)template);
@@ -6228,6 +6488,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
NULL,
"action template in using");
}
+ if (template->action_flags & flag)
+ mlx5_free_srh_flex_parser(dev);
LIST_REMOVE(template, next);
flow_hw_flex_item_release(dev, &template->flex_item);
if (template->tmpl)
@@ -8796,6 +9058,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
mem_size += (sizeof(struct mlx5_hw_q_job *) +
sizeof(struct mlx5_hw_q_job) +
sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+ sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
sizeof(struct mlx5_modification_cmd) *
MLX5_MHDR_MAX_CMD +
sizeof(struct rte_flow_item) *
@@ -8811,7 +9074,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
}
for (i = 0; i < nb_q_updated; i++) {
char mz_name[RTE_MEMZONE_NAMESIZE];
- uint8_t *encap = NULL;
+ uint8_t *encap = NULL, *push = NULL;
struct mlx5_modification_cmd *mhdr_cmd = NULL;
struct rte_flow_item *items = NULL;
struct rte_flow_hw *upd_flow = NULL;
@@ -8831,13 +9094,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
&job[_queue_attr[i]->size];
encap = (uint8_t *)
&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
- items = (struct rte_flow_item *)
+ push = (uint8_t *)
&encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+ items = (struct rte_flow_item *)
+ &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
upd_flow = (struct rte_flow_hw *)
&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
for (j = 0; j < _queue_attr[i]->size; j++) {
job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+ job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
job[j].upd_flow = &upd_flow[j];
priv->hw_q[i].job[j] = &job[j];
--
2.27.0