* [PATCH v1 0/5] add IPv6 routing extension implementation
@ 2023-02-02 10:11 Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
` (4 more replies)
0 siblings, 5 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add IPv6 routing extension matching and IPv6 protocol modify
field support.
This patch relies on the preceding ethdev one:
http://patches.dpdk.org/project/dpdk/patch/20230202100021.2445976-2-rongweil@nvidia.com/
One commit from Gregory is included to keep the series compiling.
Gregory Etelson (1):
net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
Rongwei Liu (4):
net/mlx5: adopt IPv6 routing extension prm definition
net/mlx5/hws: add IPv6 routing extension matching support
net/mlx5/hws: add modify IPv6 protocol implementation
doc/mlx5: add IPv6 routing extension matching docs
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 2 +
drivers/common/mlx5/mlx5_devx_cmds.c | 17 +++-
drivers/common/mlx5/mlx5_devx_cmds.h | 7 +-
drivers/common/mlx5/mlx5_prm.h | 29 +++++-
drivers/net/mlx5/hws/mlx5dr_definer.c | 133 ++++++++++++++++++++++----
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++
drivers/net/mlx5/mlx5.c | 103 +++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 19 +++-
drivers/net/mlx5/mlx5_flow.h | 28 ++++++
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++
drivers/net/mlx5/mlx5_flow_flex.c | 14 ++-
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++-
14 files changed, 368 insertions(+), 40 deletions(-)
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v1 1/5] net/mlx5: adopt IPv6 routing extension prm definition
2023-02-02 10:11 [PATCH v1 0/5] add IPv6 routing extension implementation Rongwei Liu
@ 2023-02-02 10:11 ` Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
` (3 subsequent siblings)
4 siblings, 1 reply; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Per the latest PRM definition, sample_id carries three parts
of information instead of a single uint32_t ID: sample_id +
modify_field_id + format_select_dw.
New FW capability bits have also been introduced to identify
the new capability.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 14 +++++++++++---
drivers/common/mlx5/mlx5_devx_cmds.h | 7 ++++++-
drivers/common/mlx5/mlx5_prm.h | 28 ++++++++++++++++++++++++++--
drivers/net/mlx5/mlx5.c | 15 +++++++++++----
drivers/net/mlx5/mlx5.h | 3 ++-
drivers/net/mlx5/mlx5_flow_flex.c | 14 +++++++++++---
6 files changed, 67 insertions(+), 14 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index e3a4927d0f..1f65ea7dcb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,8 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num)
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
@@ -636,6 +637,7 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
@@ -645,8 +647,8 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
flow_match_sample_en);
if (!en)
continue;
- ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
- flow_match_sample_field_id);
+ ids[idx++].id = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
}
if (num != idx) {
rte_errno = EINVAL;
@@ -794,6 +796,12 @@ mlx5_devx_cmd_query_hca_parse_graph_node_cap
max_num_arc_out);
attr->max_num_sample = MLX5_GET(parse_graph_node_cap, hcattr,
max_num_sample);
+ attr->anchor_en = MLX5_GET(parse_graph_node_cap, hcattr, anchor_en);
+ attr->ext_sample_id = MLX5_GET(parse_graph_node_cap, hcattr, ext_sample_id);
+ attr->sample_tunnel_inner2 = MLX5_GET(parse_graph_node_cap, hcattr,
+ sample_tunnel_inner2);
+ attr->zero_size_supported = MLX5_GET(parse_graph_node_cap, hcattr,
+ zero_size_supported);
attr->sample_id_in_out = MLX5_GET(parse_graph_node_cap, hcattr,
sample_id_in_out);
attr->max_base_header_length = MLX5_GET(parse_graph_node_cap, hcattr,
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index c94b9eac06..5b33010155 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -114,6 +114,10 @@ struct mlx5_hca_flex_attr {
uint8_t max_num_arc_out;
uint8_t max_num_sample;
uint8_t max_num_prog_sample:5; /* From HCA CAP 2 */
+ uint8_t anchor_en:1;
+ uint8_t ext_sample_id:1;
+ uint8_t sample_tunnel_inner2:1;
+ uint8_t zero_size_supported:1;
uint8_t sample_id_in_out:1;
uint16_t max_base_header_length;
uint8_t max_sample_base_offset;
@@ -706,7 +710,8 @@ int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
struct mlx5_devx_modify_tir_attr *tir_attr);
__rte_internal
int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num);
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor);
__rte_internal
struct mlx5_devx_obj *
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 9294f65e24..b32dc735a1 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1894,7 +1894,11 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 max_num_arc_in[0x08];
u8 max_num_arc_out[0x08];
u8 max_num_sample[0x08];
- u8 reserved_at_78[0x07];
+ u8 reserved_at_78[0x03];
+ u8 anchor_en[0x1];
+ u8 ext_sample_id[0x1];
+ u8 sample_tunnel_inner2[0x1];
+ u8 zero_size_supported[0x1];
u8 sample_id_in_out[0x1];
u8 max_base_header_length[0x10];
u8 reserved_at_90[0x08];
@@ -1904,6 +1908,24 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 header_length_mask_width[0x08];
};
+/* ext_sample_id structure, see PRM Table: Flow Match Sample ID Format. */
+struct mlx5_ext_sample_id {
+ union {
+ struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+ uint32_t format_select_dw:8;
+ uint32_t modify_field_id:12;
+ uint32_t sample_id:12;
+#else
+ uint32_t sample_id:12;
+ uint32_t modify_field_id:12;
+ uint32_t format_select_dw:8;
+#endif
+ };
+ uint32_t id;
+ };
+};
+
struct mlx5_ifc_flow_table_prop_layout_bits {
u8 ft_support[0x1];
u8 flow_tag[0x1];
@@ -4542,7 +4564,9 @@ struct mlx5_ifc_parse_graph_flex_bits {
u8 header_length_mode[0x4];
u8 header_length_field_offset[0x10];
u8 next_header_field_offset[0x10];
- u8 reserved_at_160[0x1b];
+ u8 reserved_at_160[0x12];
+ u8 head_anchor_id[0x6];
+ u8 reserved_at_178[0x3];
u8 next_header_field_size[0x5];
u8 header_length_field_mask[0x20];
u8 reserved_at_224[0x20];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b8643cebdd..0b97c4e78d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -964,11 +964,13 @@ int
mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;
struct mlx5_devx_graph_node_attr node = {
.modify_field_select = 0,
};
- uint32_t ids[8];
+ struct mlx5_ext_sample_id ids[8];
+ uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1004,15 +1006,20 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->offset[0] = 0x0;
prf->offset[1] = sizeof(uint32_t);
- prf->ids[0] = ids[0];
- prf->ids[1] = ids[1];
+ if (attr->ext_sample_id) {
+ prf->ids[0] = ids[0].sample_id;
+ prf->ids[1] = ids[1].sample_id;
+ } else {
+ prf->ids[0] = ids[0].id;
+ prf->ids[1] = ids[1].id;
+ }
return 0;
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 16b33e1548..83fb316ad8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1307,9 +1307,10 @@ struct mlx5_lag {
struct mlx5_flex_parser_devx {
struct mlx5_list_entry entry; /* List element at the beginning. */
uint32_t num_samples;
+ uint8_t anchor_id;
void *devx_obj;
struct mlx5_devx_graph_node_attr devx_conf;
- uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_ext_sample_id sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
};
/* Pattern field descriptor - how to translate flex pattern into samples. */
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index fb08910ddb..35f2a9923d 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -226,15 +226,18 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
misc_parameters_4);
void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_flex_item *tp;
uint32_t i, pos = 0;
+ uint32_t sample_id;
RTE_SET_USED(dev);
MLX5_ASSERT(item->spec && item->mask);
spec = item->spec;
mask = item->mask;
tp = (struct mlx5_flex_item *)spec->handle;
- MLX5_ASSERT(mlx5_flex_index(dev->data->dev_private, tp) >= 0);
+ MLX5_ASSERT(mlx5_flex_index(priv, tp) >= 0);
for (i = 0; i < tp->mapnum; i++) {
struct mlx5_flex_pattern_field *map = tp->map + i;
uint32_t id = map->reg_id;
@@ -257,9 +260,13 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
MLX5_ASSERT(id < num_samples);
id += num_samples;
}
+ if (attr->ext_sample_id)
+ sample_id = tp->devx_fp->sample_ids[id].sample_id;
+ else
+ sample_id = tp->devx_fp->sample_ids[id].id;
mlx5_flex_set_match_sample(misc4_m, misc4_v,
def, msk & def, val & msk & def,
- tp->devx_fp->sample_ids[id], id);
+ sample_id, id);
pos += map->width;
}
}
@@ -1298,7 +1305,8 @@ mlx5_flex_parser_create_cb(void *list_ctx, void *ctx)
/* Query the firmware assigned sample ids. */
ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj,
fp->sample_ids,
- fp->num_samples);
+ fp->num_samples,
+ &fp->anchor_id);
if (ret)
goto error;
DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u",
--
2.27.0
* [PATCH v1 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
2023-02-02 10:11 [PATCH v1 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
@ 2023-02-02 10:11 ` Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
` (2 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas
Cc: rasland, Gregory Etelson, Alex Vesker
From: Gregory Etelson <getelson@nvidia.com>
A new mlx5dr_context member replaces the mlx5dr_cmd_query_caps
pointer in definer_conv_data; the capabilities structure is now
reached through the mlx5dr_context.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 42 ++++++++++++++-------------
1 file changed, 22 insertions(+), 20 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 6b98eb8c96..0f1cab7e07 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -100,7 +100,7 @@ struct mlx5dr_definer_sel_ctrl {
};
struct mlx5dr_definer_conv_data {
- struct mlx5dr_cmd_query_caps *caps;
+ struct mlx5dr_context *ctx;
struct mlx5dr_definer_fc *fc;
uint8_t relaxed;
uint8_t tunnel;
@@ -815,6 +815,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp *m = item->mask;
struct mlx5dr_definer_fc *fc;
@@ -836,7 +837,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
}
if (m->teid) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -844,11 +845,11 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->item_idx = item_idx;
fc->tag_set = &mlx5dr_definer_gtp_teid_set;
fc->bit_mask = __mlx5_mask(header_gtp, teid);
- fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_1 * DW_SIZE;
}
if (m->v_pt_rsv_flags) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -857,12 +858,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
if (m->msg_type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -871,7 +872,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_msg_type_set;
fc->bit_mask = __mlx5_mask(header_gtp, msg_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
return 0;
@@ -882,12 +883,13 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp_psc *m = item->mask;
struct mlx5dr_definer_fc *fc;
/* Overwrite GTP extension flag to be 1 */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -896,12 +898,12 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
/* Overwrite next extension header type */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -911,14 +913,14 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type);
fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_2 * DW_SIZE;
}
if (!m)
return 0;
if (m->hdr.type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -927,11 +929,11 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
if (m->hdr.qfi) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -940,7 +942,7 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
return 0;
@@ -951,18 +953,19 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_ethdev *m = item->mask;
struct mlx5dr_definer_fc *fc;
uint8_t bit_offset = 0;
if (m->port_id) {
- if (!cd->caps->wire_regc_mask) {
+ if (!caps->wire_regc_mask) {
DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask");
rte_errno = ENOTSUP;
return rte_errno;
}
- while (!(cd->caps->wire_regc_mask & (1 << bit_offset)))
+ while (!(caps->wire_regc_mask & (1 << bit_offset)))
bit_offset++;
fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0];
@@ -971,7 +974,7 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
DR_CALC_SET_HDR(fc, registers, register_c_0);
fc->bit_off = bit_offset;
- fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset;
+ fc->bit_mask = caps->wire_regc_mask >> bit_offset;
} else {
DR_LOG(ERR, "Pord ID item mask must specify ID mask");
rte_errno = EINVAL;
@@ -1479,8 +1482,7 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
int ret;
cd.fc = fc;
- cd.hl = hl;
- cd.caps = ctx->caps;
+ cd.ctx = ctx;
cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH;
/* Collect all RTE fields to the field array and set header layout */
--
2.27.0
* [PATCH v1 3/5] net/mlx5/hws: add IPv6 routing extension matching support
2023-02-02 10:11 [PATCH v1 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
@ 2023-02-02 10:11 ` Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 4/5] net/mlx5: add modify IPv6 protocol implementation Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Alex Vesker
Add mlx5 HWS logic to match the IPv6 routing extension header.
When IPv6 routing extension items are detected in the pattern
template creation callback, the PMD allocates a flex parser to
sample the first dword of the SRv6 header.
Only next_hdr/segments_left/type matching is supported for now.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 7 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 91 ++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++++
drivers/net/mlx5/mlx5.c | 92 ++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 16 +++++
drivers/net/mlx5/mlx5_flow.h | 28 ++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++--
7 files changed, 268 insertions(+), 10 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 1f65ea7dcb..22a94c1e1a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- struct mlx5_ext_sample_id ids[],
+ struct mlx5_ext_sample_id *ids,
uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
@@ -637,8 +637,9 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
- *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
- for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ if (anchor)
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM && idx <= num; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
uint32_t en;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 0f1cab7e07..142fc545eb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -125,6 +125,7 @@ struct mlx5dr_definer_conv_data {
X(SET_BE16, ipv4_frag, v->fragment_offset, rte_ipv4_hdr) \
X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \
X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \
+ X(SET, ipv6_routing_hdr, IPPROTO_ROUTING, rte_flow_item_ipv6) \
X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \
@@ -293,6 +294,21 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
}
+static void
+mlx5dr_definer_ipv6_routing_ext_set(struct mlx5dr_definer_fc *fc,
+ const void *item,
+ uint8_t *tag)
+{
+ const struct rte_flow_item_ipv6_routing_ext *v = item;
+ uint32_t val;
+
+ val = v->hdr.next_hdr << __mlx5_dw_bit_off(header_ipv6_routing_ext, next_hdr);
+ val |= v->hdr.type << __mlx5_dw_bit_off(header_ipv6_routing_ext, type);
+ val |= v->hdr.segments_left <<
+ __mlx5_dw_bit_off(header_ipv6_routing_ext, segments_left);
+ DR_SET(tag, val, fc->byte_off, 0, fc->bit_mask);
+}
+
static void
mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
const void *item_spec,
@@ -1468,6 +1484,76 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static struct mlx5dr_definer_fc *
+mlx5dr_definer_get_flex_parser_fc(struct mlx5dr_definer_conv_data *cd, uint32_t byte_off)
+{
+ uint32_t byte_off_fp7 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_7);
+ uint32_t byte_off_fp0 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+ enum mlx5dr_definer_fname fname = MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+ struct mlx5dr_definer_fc *fc;
+ uint32_t idx;
+
+ if (byte_off < byte_off_fp7 || byte_off > byte_off_fp0) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ idx = (byte_off_fp0 - byte_off) / (sizeof(uint32_t));
+ fname += (enum mlx5dr_definer_fname)idx;
+ fc = &cd->fc[fname];
+ fc->byte_off = byte_off;
+ fc->bit_mask = UINT32_MAX;
+ return fc;
+}
+
+static int
+mlx5dr_definer_conv_item_ipv6_routing_ext(struct mlx5dr_definer_conv_data *cd,
+ struct rte_flow_item *item,
+ int item_idx)
+{
+ const struct rte_flow_item_ipv6_routing_ext *m = item->mask;
+ struct mlx5dr_definer_fc *fc;
+ bool inner = cd->tunnel;
+ uint32_t byte_off;
+
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_version_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l2, l3_type, inner);
+
+ /* Overwrite - Unset ethertype if present */
+ memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc));
+
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_hdr_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
+ }
+ }
+
+ if (!m)
+ return 0;
+
+ if (m->hdr.hdr_len || m->hdr.flags) {
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
+ if (m->hdr.next_hdr || m->hdr.type || m->hdr.segments_left) {
+ byte_off = flow_hw_get_srh_flex_parser_byte_off_from_ctx(cd->ctx);
+ fc = mlx5dr_definer_get_flex_parser_fc(cd, byte_off);
+ if (!fc)
+ return rte_errno;
+
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_ext_set;
+ }
+ return 0;
+}
+
static int
mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
struct mlx5dr_match_template *mt,
@@ -1583,6 +1669,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
+ item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index d52c6b0627..c857848a28 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -511,6 +511,21 @@ struct mlx5_ifc_header_ipv6_vtc_bits {
u8 flow_label[0x14];
};
+struct mlx5_ifc_header_ipv6_routing_ext_bits {
+ u8 next_hdr[0x8];
+ u8 hdr_len[0x8];
+ u8 type[0x8];
+ u8 segments_left[0x8];
+ union {
+ u8 flags[0x20];
+ struct {
+ u8 last_entry[0x8];
+ u8 flag[0x8];
+ u8 tag[0x10];
+ };
+ };
+};
+
struct mlx5_ifc_header_vxlan_bits {
u8 flags[0x8];
u8 reserved1[0x18];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0b97c4e78d..94fd5a91e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -970,7 +970,6 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
.modify_field_select = 0,
};
struct mlx5_ext_sample_id ids[8];
- uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1006,7 +1005,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, NULL);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
@@ -1041,6 +1040,95 @@ mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
prf->obj = NULL;
}
+/*
+ * Allocation of a flex parser for srh. Once refcnt is zero, the resources held
+ * by this parser will be freed.
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ struct mlx5_ext_sample_id ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
+ void *ibv_ctx = priv->sh->cdev->ctx;
+ int ret;
+
+ memset(ids, 0xff, sizeof(ids));
+ if (!config->hca_attr.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported");
+ return -ENOTSUP;
+ }
+ if (__atomic_add_fetch(&priv->sh->srh_flex_parser.refcnt, 1, __ATOMIC_RELAXED) > 1)
+ return 0;
+
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+ /* Srv6 first two DW are not counted in. */
+ node.header_length_base_value = 0x8;
+ /* The unit is uint64_t. */
+ node.header_length_field_shift = 0x3;
+ /* Header length is the 2nd byte. */
+ node.header_length_field_offset = 0x8;
+ node.header_length_field_mask = 0xF;
+ /* One byte next header protocol. */
+ node.next_header_field_size = 0x8;
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
+ node.in[0].compare_condition_value = IPPROTO_ROUTING;
+ node.sample[0].flow_match_sample_en = 1;
+ /* First come first serve no matter inner or outer. */
+ node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+ node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
+ node.out[0].compare_condition_value = IPPROTO_TCP;
+ node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
+ node.out[1].compare_condition_value = IPPROTO_UDP;
+ node.out[2].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IPV6;
+ node.out[2].compare_condition_value = IPPROTO_IPV6;
+ priv->sh->srh_flex_parser.fp = mlx5_devx_cmd_create_flex_parser(ibv_ctx, &node);
+ if (!priv->sh->srh_flex_parser.fp) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.num = 1;
+ ret = mlx5_devx_cmd_query_parse_samples(priv->sh->srh_flex_parser.fp, ids,
+ priv->sh->srh_flex_parser.num,
+ &priv->sh->srh_flex_parser.anchor_id);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.offset[0] = 0x0;
+ priv->sh->srh_flex_parser.ids[0].id = ids[0].id;
+ return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure
+ */
+void
+mlx5_free_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_internal_flex_parser_profile *fp = &priv->sh->srh_flex_parser;
+
+ if (__atomic_sub_fetch(&fp->refcnt, 1, __ATOMIC_RELAXED))
+ return;
+ if (fp->fp)
+ mlx5_devx_cmd_destroy(fp->fp);
+ fp->fp = NULL;
+ fp->num = 0;
+}
+
uint32_t
mlx5_get_supported_sw_parsing_offloads(const struct mlx5_hca_attr *attr)
{
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 83fb316ad8..bea1f62ea8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -543,6 +543,17 @@ struct mlx5_counter_stats_raw {
volatile struct flow_counter_stats *data;
};
+/* Mlx5 internal flex parser profile structure. */
+struct mlx5_internal_flex_parser_profile {
+ uint32_t num;/* Actual number of samples. */
+ /* Sample IDs for this profile. */
+ struct mlx5_ext_sample_id ids[MLX5_FLEX_ITEM_MAPPING_NUM];
+ uint32_t offset[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Each ID sample offset. */
+ uint8_t anchor_id;
+ uint32_t refcnt;
+ void *fp; /* DevX flex parser object. */
+};
+
TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
/* Counter global management structure. */
@@ -1436,6 +1447,7 @@ struct mlx5_dev_ctx_shared {
struct mlx5_uar rx_uar; /* DevX UAR for Rx. */
struct mlx5_proc_priv *pppriv; /* Pointer to primary private process. */
struct mlx5_ecpri_parser_profile ecpri_parser;
+ struct mlx5_internal_flex_parser_profile srh_flex_parser; /* srh flex parser structure. */
/* Flex parser profiles information. */
LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
struct mlx5_aso_age_mng *aso_age_mng;
@@ -2258,4 +2270,8 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void *ctx);
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+
+int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
+
+void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e376dcae93..1f359cfb12 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -219,6 +219,10 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+/* IPv6 routing extension item */
+#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
+#define MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT (UINT64_C(1) << 46)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -2611,4 +2615,28 @@ int mlx5_flow_item_field_width(struct rte_eth_dev *dev,
enum rte_flow_field_id field, int inherit,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
+
+static __rte_always_inline int
+flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ struct mlx5_priv *priv;
+ struct mlx5_hca_flex_attr *attr;
+
+ priv = rte_eth_devices[port].data->dev_private;
+ attr = &priv->sh->cdev->config.hca_attr.flex;
+ if (priv->dr_ctx == dr_ctx && attr->ext_sample_id) {
+ if (priv->sh->srh_flex_parser.num)
+ return priv->sh->srh_flex_parser.ids[0].format_select_dw *
+ sizeof(uint32_t);
+ else
+ return UINT32_MAX;
+ }
+ }
+#endif
+ return UINT32_MAX;
+}
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 952247d3cf..d0e07acc4e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -213,17 +213,17 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
}
/**
- * Generate the pattern item flags.
- * Will be used for shared RSS action.
+ * Generate the matching pattern item flags.
*
* @param[in] items
* Pointer to the list of items.
*
* @return
- * Item flags.
+ * Matching item flags. RSS hash field function
+ * silently ignores the flags which are unsupported.
*/
static uint64_t
-flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
+flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
{
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -249,6 +249,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ last_item = tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
case RTE_FLOW_ITEM_TYPE_GRE:
last_item = MLX5_FLOW_LAYER_GRE;
break;
@@ -4736,6 +4740,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP:
case RTE_FLOW_ITEM_TYPE_ICMP6:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
@@ -4864,7 +4869,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
"cannot create match template");
return NULL;
}
- it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
+ it->item_flags = flow_hw_matching_item_flags_get(tmpl_items);
if (copied_items) {
if (attr->ingress)
it->implicit_port = true;
@@ -4872,6 +4877,17 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
it->implicit_tag = true;
mlx5_free(copied_items);
}
+ /* Either inner or outer, can't both. */
+ if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) {
+ if (((it->item_flags & MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT) &&
+ (it->item_flags & MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) ||
+ (mlx5_alloc_srh_flex_parser(dev))) {
+ claim_zero(mlx5dr_match_template_destroy(it->mt));
+ mlx5_free(it);
+ return NULL;
+ }
+ }
__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
return it;
@@ -4903,6 +4919,9 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
NULL,
"item template in using");
}
+ if (template->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT))
+ mlx5_free_srh_flex_parser(dev);
LIST_REMOVE(template, next);
claim_zero(mlx5dr_match_template_destroy(template->mt));
mlx5_free(template);
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v1 4/5] net/mlx5: add modify IPv6 protocol implementation
2023-02-02 10:11 [PATCH v1 0/5] add IPv6 routing extension implementation Rongwei Liu
` (2 preceding siblings ...)
2023-02-02 10:11 ` [PATCH v1 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
@ 2023-02-02 10:11 ` Rongwei Liu
2023-02-02 10:11 ` [PATCH v1 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add HWS modify IPv6 protocol implementation.
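For reference, the patch below computes a big-endian bit offset for the 8-bit next-header field as `8 - (offset + width)`. This can be sketched as a standalone helper (the function name is illustrative, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Modify-header offsets count bits from the MSB of the field, so a
 * sub-field of `width` bits starting at bit `offset` (MSB-relative)
 * begins `8 - (offset + width)` bits above the LSB of the byte.
 * Mirrors the `off_be` computation in the patch; name is illustrative. */
static inline uint32_t
ipv6_proto_off_be(uint32_t offset, uint32_t width)
{
	assert(offset + width <= 8); /* next_hdr is an 8-bit field */
	return 8 - (offset + width);
}
```

For the full 8-bit field (`offset` 0, `width` 8) the result is 0, matching a plain byte write.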
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++++++++++
2 files changed, 11 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index b32dc735a1..7667874152 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -760,6 +760,7 @@ enum mlx5_modification_field {
MLX5_MODI_TUNNEL_HDR_DW_1 = 0x75,
MLX5_MODI_GTPU_FIRST_EXT_DW_0 = 0x76,
MLX5_MODI_HASH_RESULT = 0x81,
+ MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A,
};
/* Total number of metadata reg_c's. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 7ca909999b..e972a2dc5a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1357,6 +1357,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev,
case RTE_FLOW_FIELD_IPV6_DSCP:
return 6;
case RTE_FLOW_FIELD_IPV6_HOPLIMIT:
+ case RTE_FLOW_FIELD_IPV6_PROTO:
return 8;
case RTE_FLOW_FIELD_IPV6_SRC:
case RTE_FLOW_FIELD_IPV6_DST:
@@ -1883,6 +1884,15 @@ mlx5_flow_field_id_to_modify_info
info[idx].offset = data->offset;
}
break;
+ case RTE_FLOW_FIELD_IPV6_PROTO:
+ MLX5_ASSERT(data->offset + width <= 8);
+ off_be = 8 - (data->offset + width);
+ info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IPV6_NEXT_HDR};
+ if (mask)
+ mask[idx] = flow_modify_info_mask_8(width, off_be);
+ else
+ info[idx].offset = off_be;
+ break;
case RTE_FLOW_FIELD_POINTER:
case RTE_FLOW_FIELD_VALUE:
default:
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v1 5/5] doc/mlx5: add IPv6 routing extension matching docs
2023-02-02 10:11 [PATCH v1 0/5] add IPv6 routing extension implementation Rongwei Liu
` (3 preceding siblings ...)
2023-02-02 10:11 ` [PATCH v1 4/5] net/mlx5: add modify IPv6 protocol implementation Rongwei Liu
@ 2023-02-02 10:11 ` Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-02 10:11 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Ferruh Yigit
Update the mlx5-related documentation for IPv6 routing extension
header matching.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 2 ++
3 files changed, 4 insertions(+)
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 510cc6679d..3d0744a243 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -141,6 +141,7 @@ udp =
vlan =
vxlan =
vxlan_gpe =
+ipv6_routing_ext =
[rte_flow actions]
age =
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index 62fd330e2b..bd911a467b 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -87,6 +87,7 @@ vlan = Y
vxlan = Y
vxlan_gpe = Y
represented_port = Y
+ipv6_routing_ext = Y
[rte_flow actions]
age = I
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index f137f156f9..966f1bd83f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -106,6 +106,7 @@ Features
- Sub-Function representors.
- Sub-Function.
- Matching on represented port.
+- Matching on IPv6 routing extension header.
Limitations
@@ -174,6 +175,7 @@ Limitations
- ``-EAGAIN`` for ``rte_eth_dev_start()``.
- ``-EBUSY`` for ``rte_eth_dev_stop()``.
+ - Matching on ICMP6 following IPv6 routing extension header should match ipv6_routing_ext_next_hdr instead of ICMP6.
- When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
specific VLAN will match for VLAN packets as well:
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 0/5] add IPv6 routing extension implementation
2023-02-02 10:11 ` [PATCH v1 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
` (4 more replies)
0 siblings, 5 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add IPv6 routing extension matching and IPv6 protocol modify field
support.
This patch relies on the preceding ethdev one:
http://patches.dpdk.org/project/dpdk/patch/20230202100021.2445976-2-rongweil@nvidia.com/
One commit from Gregory is included to make the compilation pass.
Gregory Etelson (1):
net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
Rongwei Liu (4):
net/mlx5: adopt IPv6 routing extension prm definition
net/mlx5/hws: add IPv6 routing extension matching support
net/mlx5/hws: add modify IPv6 protocol implementation
doc/mlx5: add IPv6 routing extension matching docs
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 2 +
drivers/common/mlx5/mlx5_devx_cmds.c | 17 +++-
drivers/common/mlx5/mlx5_devx_cmds.h | 7 +-
drivers/common/mlx5/mlx5_prm.h | 29 +++++-
drivers/net/mlx5/hws/mlx5dr_definer.c | 132 ++++++++++++++++++++++----
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++
drivers/net/mlx5/mlx5.c | 103 +++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 19 +++-
drivers/net/mlx5/mlx5_flow.h | 28 ++++++
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++
drivers/net/mlx5/mlx5_flow_flex.c | 14 ++-
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++-
14 files changed, 368 insertions(+), 39 deletions(-)
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 1/5] net/mlx5: adopt IPv6 routing extension prm definition
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
` (3 subsequent siblings)
4 siblings, 1 reply; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Per the newest PRM definition, sample_id carries 3 parts of
information instead of a single uint32_t id: sample_id +
modify_field_id + format_select_dw.
Also, new FW capability bits have been introduced to identify
the new capability.
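The 3-part split can be sketched with a standalone bitfield union mirroring the structure this patch adds (little-endian branch only; the layout assumes the common little-endian bit-field ABI where the first-declared member occupies the least significant bits):

```c
#include <stdint.h>

/* Standalone mirror of the 3-part sample id described above.
 * Little-endian layout only: on common little-endian ABIs the
 * first-declared bit-field occupies the least significant bits. */
struct ext_sample_id {
	union {
		struct {
			uint32_t format_select_dw:8;  /* bits 0..7   */
			uint32_t modify_field_id:12;  /* bits 8..19  */
			uint32_t sample_id:12;        /* bits 20..31 */
		};
		uint32_t id;                          /* raw FW value */
	};
};
```

Reading `sample_id` from the union thus strips the format/modify parts that old firmware did not report.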
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 14 +++++++++++---
drivers/common/mlx5/mlx5_devx_cmds.h | 7 ++++++-
drivers/common/mlx5/mlx5_prm.h | 28 ++++++++++++++++++++++++++--
drivers/net/mlx5/mlx5.c | 15 +++++++++++----
drivers/net/mlx5/mlx5.h | 3 ++-
drivers/net/mlx5/mlx5_flow_flex.c | 14 +++++++++++---
6 files changed, 67 insertions(+), 14 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index e3a4927d0f..1f65ea7dcb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,8 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num)
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
@@ -636,6 +637,7 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
@@ -645,8 +647,8 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
flow_match_sample_en);
if (!en)
continue;
- ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
- flow_match_sample_field_id);
+ ids[idx++].id = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
}
if (num != idx) {
rte_errno = EINVAL;
@@ -794,6 +796,12 @@ mlx5_devx_cmd_query_hca_parse_graph_node_cap
max_num_arc_out);
attr->max_num_sample = MLX5_GET(parse_graph_node_cap, hcattr,
max_num_sample);
+ attr->anchor_en = MLX5_GET(parse_graph_node_cap, hcattr, anchor_en);
+ attr->ext_sample_id = MLX5_GET(parse_graph_node_cap, hcattr, ext_sample_id);
+ attr->sample_tunnel_inner2 = MLX5_GET(parse_graph_node_cap, hcattr,
+ sample_tunnel_inner2);
+ attr->zero_size_supported = MLX5_GET(parse_graph_node_cap, hcattr,
+ zero_size_supported);
attr->sample_id_in_out = MLX5_GET(parse_graph_node_cap, hcattr,
sample_id_in_out);
attr->max_base_header_length = MLX5_GET(parse_graph_node_cap, hcattr,
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index c94b9eac06..5b33010155 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -114,6 +114,10 @@ struct mlx5_hca_flex_attr {
uint8_t max_num_arc_out;
uint8_t max_num_sample;
uint8_t max_num_prog_sample:5; /* From HCA CAP 2 */
+ uint8_t anchor_en:1;
+ uint8_t ext_sample_id:1;
+ uint8_t sample_tunnel_inner2:1;
+ uint8_t zero_size_supported:1;
uint8_t sample_id_in_out:1;
uint16_t max_base_header_length;
uint8_t max_sample_base_offset;
@@ -706,7 +710,8 @@ int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
struct mlx5_devx_modify_tir_attr *tir_attr);
__rte_internal
int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num);
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor);
__rte_internal
struct mlx5_devx_obj *
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 243952bf85..d93b0bfbae 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1895,7 +1895,11 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 max_num_arc_in[0x08];
u8 max_num_arc_out[0x08];
u8 max_num_sample[0x08];
- u8 reserved_at_78[0x07];
+ u8 reserved_at_78[0x03];
+ u8 anchor_en[0x1];
+ u8 ext_sample_id[0x1];
+ u8 sample_tunnel_inner2[0x1];
+ u8 zero_size_supported[0x1];
u8 sample_id_in_out[0x1];
u8 max_base_header_length[0x10];
u8 reserved_at_90[0x08];
@@ -1905,6 +1909,24 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 header_length_mask_width[0x08];
};
+/* ext_sample_id structure, see PRM Table: Flow Match Sample ID Format. */
+struct mlx5_ext_sample_id {
+ union {
+ struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+ uint32_t format_select_dw:8;
+ uint32_t modify_field_id:12;
+ uint32_t sample_id:12;
+#else
+ uint32_t sample_id:12;
+ uint32_t modify_field_id:12;
+ uint32_t format_select_dw:8;
+#endif
+ };
+ uint32_t id;
+ };
+};
+
struct mlx5_ifc_flow_table_prop_layout_bits {
u8 ft_support[0x1];
u8 flow_tag[0x1];
@@ -4577,7 +4599,9 @@ struct mlx5_ifc_parse_graph_flex_bits {
u8 header_length_mode[0x4];
u8 header_length_field_offset[0x10];
u8 next_header_field_offset[0x10];
- u8 reserved_at_160[0x1b];
+ u8 reserved_at_160[0x12];
+ u8 head_anchor_id[0x6];
+ u8 reserved_at_178[0x3];
u8 next_header_field_size[0x5];
u8 header_length_field_mask[0x20];
u8 reserved_at_224[0x20];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b8643cebdd..0b97c4e78d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -964,11 +964,13 @@ int
mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;
struct mlx5_devx_graph_node_attr node = {
.modify_field_select = 0,
};
- uint32_t ids[8];
+ struct mlx5_ext_sample_id ids[8];
+ uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1004,15 +1006,20 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->offset[0] = 0x0;
prf->offset[1] = sizeof(uint32_t);
- prf->ids[0] = ids[0];
- prf->ids[1] = ids[1];
+ if (attr->ext_sample_id) {
+ prf->ids[0] = ids[0].sample_id;
+ prf->ids[1] = ids[1].sample_id;
+ } else {
+ prf->ids[0] = ids[0].id;
+ prf->ids[1] = ids[1].id;
+ }
return 0;
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 16b33e1548..83fb316ad8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1307,9 +1307,10 @@ struct mlx5_lag {
struct mlx5_flex_parser_devx {
struct mlx5_list_entry entry; /* List element at the beginning. */
uint32_t num_samples;
+ uint8_t anchor_id;
void *devx_obj;
struct mlx5_devx_graph_node_attr devx_conf;
- uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_ext_sample_id sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
};
/* Pattern field descriptor - how to translate flex pattern into samples. */
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index fb08910ddb..35f2a9923d 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -226,15 +226,18 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
misc_parameters_4);
void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_flex_item *tp;
uint32_t i, pos = 0;
+ uint32_t sample_id;
RTE_SET_USED(dev);
MLX5_ASSERT(item->spec && item->mask);
spec = item->spec;
mask = item->mask;
tp = (struct mlx5_flex_item *)spec->handle;
- MLX5_ASSERT(mlx5_flex_index(dev->data->dev_private, tp) >= 0);
+ MLX5_ASSERT(mlx5_flex_index(priv, tp) >= 0);
for (i = 0; i < tp->mapnum; i++) {
struct mlx5_flex_pattern_field *map = tp->map + i;
uint32_t id = map->reg_id;
@@ -257,9 +260,13 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
MLX5_ASSERT(id < num_samples);
id += num_samples;
}
+ if (attr->ext_sample_id)
+ sample_id = tp->devx_fp->sample_ids[id].sample_id;
+ else
+ sample_id = tp->devx_fp->sample_ids[id].id;
mlx5_flex_set_match_sample(misc4_m, misc4_v,
def, msk & def, val & msk & def,
- tp->devx_fp->sample_ids[id], id);
+ sample_id, id);
pos += map->width;
}
}
@@ -1298,7 +1305,8 @@ mlx5_flex_parser_create_cb(void *list_ctx, void *ctx)
/* Query the firmware assigned sample ids. */
ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj,
fp->sample_ids,
- fp->num_samples);
+ fp->num_samples,
+ &fp->anchor_id);
if (ret)
goto error;
DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u",
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
` (2 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas
Cc: rasland, Gregory Etelson, Alex Vesker
From: Gregory Etelson <getelson@nvidia.com>
A new mlx5dr_context member replaces the mlx5dr_cmd_query_caps
pointer in definer_conv_data; the capabilities structure is
reachable through the mlx5dr_context.
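The shape of the refactor can be sketched with illustrative stand-ins (these are not the real mlx5dr definitions): keeping the whole context in the conversion data, instead of only the caps pointer, lets later converters reach any other context state as well.

```c
#include <stddef.h>

/* Illustrative stand-ins, not the real mlx5dr structures. */
struct caps { int flex_protocols; };
struct ctx  { struct caps *caps; };

struct conv_data {
	struct ctx *ctx; /* was: struct caps *caps */
};

static inline int
conv_flex_protocols(struct conv_data *cd)
{
	/* One local dereference per use site recovers the old access. */
	struct caps *caps = cd->ctx->caps;
	return caps->flex_protocols;
}
```

The per-function `caps` local keeps the converter bodies as short as before while widening what they can see.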
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 41 ++++++++++++++-------------
1 file changed, 22 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 4849158407..e7b42ee912 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -100,7 +100,7 @@ struct mlx5dr_definer_sel_ctrl {
};
struct mlx5dr_definer_conv_data {
- struct mlx5dr_cmd_query_caps *caps;
+ struct mlx5dr_context *ctx;
struct mlx5dr_definer_fc *fc;
uint8_t relaxed;
uint8_t tunnel;
@@ -888,6 +888,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp *m = item->mask;
struct mlx5dr_definer_fc *fc;
@@ -909,7 +910,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
}
if (m->hdr.teid) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -917,11 +918,11 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->item_idx = item_idx;
fc->tag_set = &mlx5dr_definer_gtp_teid_set;
fc->bit_mask = __mlx5_mask(header_gtp, teid);
- fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_1 * DW_SIZE;
}
if (m->hdr.gtp_hdr_info) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -930,12 +931,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
if (m->hdr.msg_type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -944,7 +945,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_msg_type_set;
fc->bit_mask = __mlx5_mask(header_gtp, msg_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
return 0;
@@ -955,12 +956,13 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp_psc *m = item->mask;
struct mlx5dr_definer_fc *fc;
/* Overwrite GTP extension flag to be 1 */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -969,12 +971,12 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
/* Overwrite next extension header type */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -984,14 +986,14 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type);
fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_2 * DW_SIZE;
}
if (!m)
return 0;
if (m->hdr.type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -1000,11 +1002,11 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
if (m->hdr.qfi) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -1013,7 +1015,7 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
return 0;
@@ -1024,18 +1026,19 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_ethdev *m = item->mask;
struct mlx5dr_definer_fc *fc;
uint8_t bit_offset = 0;
if (m->port_id) {
- if (!cd->caps->wire_regc_mask) {
+ if (!caps->wire_regc_mask) {
DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask");
rte_errno = ENOTSUP;
return rte_errno;
}
- while (!(cd->caps->wire_regc_mask & (1 << bit_offset)))
+ while (!(caps->wire_regc_mask & (1 << bit_offset)))
bit_offset++;
fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0];
@@ -1044,7 +1047,7 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
DR_CALC_SET_HDR(fc, registers, register_c_0);
fc->bit_off = bit_offset;
- fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset;
+ fc->bit_mask = caps->wire_regc_mask >> bit_offset;
} else {
DR_LOG(ERR, "Pord ID item mask must specify ID mask");
rte_errno = EINVAL;
@@ -1657,7 +1660,7 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
int i, ret;
cd.fc = fc;
- cd.caps = ctx->caps;
+ cd.ctx = ctx;
cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH;
/* Collect all RTE fields to the field array and set header layout */
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 3/5] net/mlx5/hws: add IPv6 routing extension matching support
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 4/5] net/mlx5/hws: add modify IPv6 protocol implementation Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Alex Vesker
Add mlx5 HWS logic to match the IPv6 routing extension header.
Upon detecting IPv6 routing extension items in the pattern template
create callback, the PMD allocates a flex parser to sample the first
dword of the SRv6 header.
Only next_hdr/segments_left/type matching is supported for now.
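The sampled dword can be sketched as follows. Byte positions follow RFC 8754's routing header layout (next_hdr, hdr_ext_len, type, segments_left, MSB first); the packing order is an assumption mirroring the definer bit offsets, and the helper name is illustrative:

```c
#include <stdint.h>

/* Sketch of packing the first SRv6 dword the way the definer matches it.
 * Assumed layout (MSB first): next_hdr | hdr_ext_len | type | segments_left.
 * hdr_ext_len stays zero here since the patch does not match on it. */
static inline uint32_t
srh_first_dw(uint8_t next_hdr, uint8_t type, uint8_t segments_left)
{
	return ((uint32_t)next_hdr << 24) |
	       ((uint32_t)type << 8) |
	       (uint32_t)segments_left;
}
```

A rule matching "next header UDP, SRH type 4, 2 segments left" would thus compare the sampled dword against `srh_first_dw(17, 4, 2)` under a mask covering those three bytes.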
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 7 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 91 ++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++++
drivers/net/mlx5/mlx5.c | 92 ++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 16 +++++
drivers/net/mlx5/mlx5_flow.h | 28 ++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++--
7 files changed, 268 insertions(+), 10 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 1f65ea7dcb..22a94c1e1a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- struct mlx5_ext_sample_id ids[],
+ struct mlx5_ext_sample_id *ids,
uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
@@ -637,8 +637,9 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
- *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
- for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ if (anchor)
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM && idx <= num; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
uint32_t en;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index e7b42ee912..396ab2c19e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -125,6 +125,7 @@ struct mlx5dr_definer_conv_data {
X(SET_BE16, ipv4_len, v->total_length, rte_ipv4_hdr) \
X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \
X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \
+ X(SET, ipv6_routing_hdr, IPPROTO_ROUTING, rte_flow_item_ipv6) \
X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \
@@ -293,6 +294,21 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
}
+static void
+mlx5dr_definer_ipv6_routing_ext_set(struct mlx5dr_definer_fc *fc,
+ const void *item,
+ uint8_t *tag)
+{
+ const struct rte_flow_item_ipv6_routing_ext *v = item;
+ uint32_t val;
+
+ val = v->hdr.next_hdr << __mlx5_dw_bit_off(header_ipv6_routing_ext, next_hdr);
+ val |= v->hdr.type << __mlx5_dw_bit_off(header_ipv6_routing_ext, type);
+ val |= v->hdr.segments_left <<
+ __mlx5_dw_bit_off(header_ipv6_routing_ext, segments_left);
+ DR_SET(tag, val, fc->byte_off, 0, fc->bit_mask);
+}
+
static void
mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
const void *item_spec,
@@ -1589,6 +1605,76 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static struct mlx5dr_definer_fc *
+mlx5dr_definer_get_flex_parser_fc(struct mlx5dr_definer_conv_data *cd, uint32_t byte_off)
+{
+ uint32_t byte_off_fp7 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_7);
+ uint32_t byte_off_fp0 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+ enum mlx5dr_definer_fname fname = MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+ struct mlx5dr_definer_fc *fc;
+ uint32_t idx;
+
+ if (byte_off < byte_off_fp7 || byte_off > byte_off_fp0) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ idx = (byte_off_fp0 - byte_off) / (sizeof(uint32_t));
+ fname += (enum mlx5dr_definer_fname)idx;
+ fc = &cd->fc[fname];
+ fc->byte_off = byte_off;
+ fc->bit_mask = UINT32_MAX;
+ return fc;
+}
+
+static int
+mlx5dr_definer_conv_item_ipv6_routing_ext(struct mlx5dr_definer_conv_data *cd,
+ struct rte_flow_item *item,
+ int item_idx)
+{
+ const struct rte_flow_item_ipv6_routing_ext *m = item->mask;
+ struct mlx5dr_definer_fc *fc;
+ bool inner = cd->tunnel;
+ uint32_t byte_off;
+
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_version_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l2, l3_type, inner);
+
+ /* Overwrite - Unset ethertype if present */
+ memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc));
+
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_hdr_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
+ }
+ }
+
+ if (!m)
+ return 0;
+
+ if (m->hdr.hdr_len || m->hdr.flags) {
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
+ if (m->hdr.next_hdr || m->hdr.type || m->hdr.segments_left) {
+ byte_off = flow_hw_get_srh_flex_parser_byte_off_from_ctx(cd->ctx);
+ fc = mlx5dr_definer_get_flex_parser_fc(cd, byte_off);
+ if (!fc)
+ return rte_errno;
+
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_ext_set;
+ }
+ return 0;
+}
+
static int
mlx5dr_definer_mt_set_fc(struct mlx5dr_match_template *mt,
struct mlx5dr_definer_fc *fc,
@@ -1770,6 +1856,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
+ item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 464872acd6..7420971f4a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -519,6 +519,21 @@ struct mlx5_ifc_header_ipv6_vtc_bits {
u8 flow_label[0x14];
};
+struct mlx5_ifc_header_ipv6_routing_ext_bits {
+ u8 next_hdr[0x8];
+ u8 hdr_len[0x8];
+ u8 type[0x8];
+ u8 segments_left[0x8];
+ union {
+ u8 flags[0x20];
+ struct {
+ u8 last_entry[0x8];
+ u8 flag[0x8];
+ u8 tag[0x10];
+ };
+ };
+};
+
struct mlx5_ifc_header_vxlan_bits {
u8 flags[0x8];
u8 reserved1[0x18];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0b97c4e78d..94fd5a91e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -970,7 +970,6 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
.modify_field_select = 0,
};
struct mlx5_ext_sample_id ids[8];
- uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1006,7 +1005,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, NULL);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
@@ -1041,6 +1040,95 @@ mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
prf->obj = NULL;
}
+/*
+ * Allocation of a flex parser for srh. Once refcnt is zero, the resources held
+ * by this parser will be freed.
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ struct mlx5_ext_sample_id ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
+ void *ibv_ctx = priv->sh->cdev->ctx;
+ int ret;
+
+ memset(ids, 0xff, sizeof(ids));
+ if (!config->hca_attr.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported");
+ return -ENOTSUP;
+ }
+ if (__atomic_add_fetch(&priv->sh->srh_flex_parser.refcnt, 1, __ATOMIC_RELAXED) > 1)
+ return 0;
+
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+ /* Srv6 first two DW are not counted in. */
+ node.header_length_base_value = 0x8;
+ /* The unit is uint64_t. */
+ node.header_length_field_shift = 0x3;
+ /* Header length is the 2nd byte. */
+ node.header_length_field_offset = 0x8;
+ node.header_length_field_mask = 0xF;
+ /* One byte next header protocol. */
+ node.next_header_field_size = 0x8;
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
+ node.in[0].compare_condition_value = IPPROTO_ROUTING;
+ node.sample[0].flow_match_sample_en = 1;
+ /* First come first serve no matter inner or outer. */
+ node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+ node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
+ node.out[0].compare_condition_value = IPPROTO_TCP;
+ node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
+ node.out[1].compare_condition_value = IPPROTO_UDP;
+ node.out[2].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IPV6;
+ node.out[2].compare_condition_value = IPPROTO_IPV6;
+ priv->sh->srh_flex_parser.fp = mlx5_devx_cmd_create_flex_parser(ibv_ctx, &node);
+ if (!priv->sh->srh_flex_parser.fp) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.num = 1;
+ ret = mlx5_devx_cmd_query_parse_samples(priv->sh->srh_flex_parser.fp, ids,
+ priv->sh->srh_flex_parser.num,
+ &priv->sh->srh_flex_parser.anchor_id);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.offset[0] = 0x0;
+ priv->sh->srh_flex_parser.ids[0].id = ids[0].id;
+ return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure
+ */
+void
+mlx5_free_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_internal_flex_parser_profile *fp = &priv->sh->srh_flex_parser;
+
+ if (__atomic_sub_fetch(&fp->refcnt, 1, __ATOMIC_RELAXED))
+ return;
+ if (fp->fp)
+ mlx5_devx_cmd_destroy(fp->fp);
+ fp->fp = NULL;
+ fp->num = 0;
+}
+
uint32_t
mlx5_get_supported_sw_parsing_offloads(const struct mlx5_hca_attr *attr)
{
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 83fb316ad8..bea1f62ea8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -543,6 +543,17 @@ struct mlx5_counter_stats_raw {
volatile struct flow_counter_stats *data;
};
+/* Mlx5 internal flex parser profile structure. */
+struct mlx5_internal_flex_parser_profile {
+ uint32_t num;/* Actual number of samples. */
+ /* Sample IDs for this profile. */
+ struct mlx5_ext_sample_id ids[MLX5_FLEX_ITEM_MAPPING_NUM];
+ uint32_t offset[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Each ID sample offset. */
+ uint8_t anchor_id;
+ uint32_t refcnt;
+ void *fp; /* DevX flex parser object. */
+};
+
TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
/* Counter global management structure. */
@@ -1436,6 +1447,7 @@ struct mlx5_dev_ctx_shared {
struct mlx5_uar rx_uar; /* DevX UAR for Rx. */
struct mlx5_proc_priv *pppriv; /* Pointer to primary private process. */
struct mlx5_ecpri_parser_profile ecpri_parser;
+ struct mlx5_internal_flex_parser_profile srh_flex_parser; /* srh flex parser structure. */
/* Flex parser profiles information. */
LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
struct mlx5_aso_age_mng *aso_age_mng;
@@ -2258,4 +2270,8 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void *ctx);
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+
+int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
+
+void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 86311b0b08..4bef2296b8 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -219,6 +219,10 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+/* IPv6 routing extension item */
+#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
+#define MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT (UINT64_C(1) << 46)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -2615,4 +2619,28 @@ int mlx5_flow_item_field_width(struct rte_eth_dev *dev,
enum rte_flow_field_id field, int inherit,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
+
+static __rte_always_inline int
+flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ struct mlx5_priv *priv;
+ struct mlx5_hca_flex_attr *attr;
+
+ priv = rte_eth_devices[port].data->dev_private;
+ attr = &priv->sh->cdev->config.hca_attr.flex;
+ if (priv->dr_ctx == dr_ctx && attr->ext_sample_id) {
+ if (priv->sh->srh_flex_parser.num)
+ return priv->sh->srh_flex_parser.ids[0].format_select_dw *
+ sizeof(uint32_t);
+ else
+ return UINT32_MAX;
+ }
+ }
+#endif
+ return UINT32_MAX;
+}
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 798bcca710..6799b8a89f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -213,17 +213,17 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
}
/**
- * Generate the pattern item flags.
- * Will be used for shared RSS action.
+ * Generate the matching pattern item flags.
*
* @param[in] items
* Pointer to the list of items.
*
* @return
- * Item flags.
+ * Matching item flags. RSS hash field function
+ * silently ignores the flags which are unsupported.
*/
static uint64_t
-flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
+flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
{
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -249,6 +249,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ last_item = tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
case RTE_FLOW_ITEM_TYPE_GRE:
last_item = MLX5_FLOW_LAYER_GRE;
break;
@@ -4738,6 +4742,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
@@ -4866,7 +4871,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
"cannot create match template");
return NULL;
}
- it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
+ it->item_flags = flow_hw_matching_item_flags_get(tmpl_items);
if (copied_items) {
if (attr->ingress)
it->implicit_port = true;
@@ -4874,6 +4879,17 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
it->implicit_tag = true;
mlx5_free(copied_items);
}
+ /* Either inner or outer, can't both. */
+ if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) {
+ if (((it->item_flags & MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT) &&
+ (it->item_flags & MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) ||
+ (mlx5_alloc_srh_flex_parser(dev))) {
+ claim_zero(mlx5dr_match_template_destroy(it->mt));
+ mlx5_free(it);
+ return NULL;
+ }
+ }
__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
return it;
@@ -4905,6 +4921,9 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
NULL,
"item template in using");
}
+ if (template->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT))
+ mlx5_free_srh_flex_parser(dev);
LIST_REMOVE(template, next);
claim_zero(mlx5dr_match_template_destroy(template->mt));
mlx5_free(template);
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v2 4/5] net/mlx5/hws: add modify IPv6 protocol implementation
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
` (2 preceding siblings ...)
2023-02-13 11:37 ` [PATCH v2 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
2023-02-13 11:37 ` [PATCH v2 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add HWS modify IPv6 protocol implementation.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++++++++++
2 files changed, 11 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d93b0bfbae..c05bce714a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -760,6 +760,7 @@ enum mlx5_modification_field {
MLX5_MODI_TUNNEL_HDR_DW_1 = 0x75,
MLX5_MODI_GTPU_FIRST_EXT_DW_0 = 0x76,
MLX5_MODI_HASH_RESULT = 0x81,
+ MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A,
};
/* Total number of metadata reg_c's. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9e5db6b945..f93dd4073c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1357,6 +1357,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev,
case RTE_FLOW_FIELD_IPV6_DSCP:
return 6;
case RTE_FLOW_FIELD_IPV6_HOPLIMIT:
+ case RTE_FLOW_FIELD_IPV6_PROTO:
return 8;
case RTE_FLOW_FIELD_IPV6_SRC:
case RTE_FLOW_FIELD_IPV6_DST:
@@ -1883,6 +1884,15 @@ mlx5_flow_field_id_to_modify_info
info[idx].offset = data->offset;
}
break;
+ case RTE_FLOW_FIELD_IPV6_PROTO:
+ MLX5_ASSERT(data->offset + width <= 8);
+ off_be = 8 - (data->offset + width);
+ info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IPV6_NEXT_HDR};
+ if (mask)
+ mask[idx] = flow_modify_info_mask_8(width, off_be);
+ else
+ info[idx].offset = off_be;
+ break;
case RTE_FLOW_FIELD_POINTER:
case RTE_FLOW_FIELD_VALUE:
default:
--
2.27.0
* [PATCH v2 5/5] doc/mlx5: add IPv6 routing extension matching docs
2023-02-13 11:37 ` [PATCH v2 0/5] add IPv6 routing extension implementation Rongwei Liu
` (3 preceding siblings ...)
2023-02-13 11:37 ` [PATCH v2 4/5] net/mlx5/hws: add modify IPv6 protocol implementation Rongwei Liu
@ 2023-02-13 11:37 ` Rongwei Liu
4 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-13 11:37 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Ferruh Yigit
Update the mlx5-related documentation on IPv6 routing extension
header matching.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 2 ++
3 files changed, 4 insertions(+)
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 976a020985..b1ad3bdca0 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -143,6 +143,7 @@ udp =
vlan =
vxlan =
vxlan_gpe =
+ipv6_routing_ext =
[rte_flow actions]
age =
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index eb016f34da..dac5ee5579 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -89,6 +89,7 @@ vlan = Y
vxlan = Y
vxlan_gpe = Y
represented_port = Y
+ipv6_routing_ext = Y
[rte_flow actions]
age = I
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9c6f1cca19..cf8bf25054 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -106,6 +106,7 @@ Features
- Sub-Function representors.
- Sub-Function.
- Matching on represented port.
+- Matching on IPv6 routing extension header.
Limitations
@@ -174,6 +175,7 @@ Limitations
- ``-EAGAIN`` for ``rte_eth_dev_start()``.
- ``-EBUSY`` for ``rte_eth_dev_stop()``.
+ - Matching on ICMP6 following an IPv6 routing extension header should match on ipv6_routing_ext_next_hdr instead of ICMP6.
- When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
specific VLAN will match for VLAN packets as well:
--
2.27.0
* [PATCH v3 0/5] add IPv6 routing extension implementation
2023-02-13 11:37 ` [PATCH v2 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add IPv6 routing extension matching and IPv6 protocol modify field
support.
This patch relies on the preceding ethdev one:
http://patches.dpdk.org/project/dpdk/patch/20230202100021.2445976-2-rongweil@nvidia.com/
One commit from Gregory is included to make the series compile.
v3: add more description to mlx5.rst.
Gregory Etelson (1):
net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
Rongwei Liu (4):
net/mlx5: adopt IPv6 routing extension prm definition
net/mlx5/hws: add IPv6 routing extension matching support
net/mlx5/hws: add modify IPv6 protocol implementation
doc/mlx5: add IPv6 routing extension matching docs
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 10 ++
drivers/common/mlx5/mlx5_devx_cmds.c | 17 +++-
drivers/common/mlx5/mlx5_devx_cmds.h | 7 +-
drivers/common/mlx5/mlx5_prm.h | 29 +++++-
drivers/net/mlx5/hws/mlx5dr_definer.c | 132 ++++++++++++++++++++++----
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++
drivers/net/mlx5/mlx5.c | 103 +++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 19 +++-
drivers/net/mlx5/mlx5_flow.h | 28 ++++++
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++
drivers/net/mlx5/mlx5_flow_flex.c | 14 ++-
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++-
14 files changed, 376 insertions(+), 39 deletions(-)
--
2.27.0
* [PATCH v3 1/5] net/mlx5: adopt IPv6 routing extension prm definition
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
` (4 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Per the newest PRM definition, sample_id carries three parts
of information instead of a single uint32_t id: sample_id +
modify_field_id + format_select_dw.
New FW capability bits have also been introduced to identify
the new capability.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 14 +++++++++++---
drivers/common/mlx5/mlx5_devx_cmds.h | 7 ++++++-
drivers/common/mlx5/mlx5_prm.h | 28 ++++++++++++++++++++++++++--
drivers/net/mlx5/mlx5.c | 15 +++++++++++----
drivers/net/mlx5/mlx5.h | 3 ++-
drivers/net/mlx5/mlx5_flow_flex.c | 14 +++++++++++---
6 files changed, 67 insertions(+), 14 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index e3a4927d0f..1f65ea7dcb 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,8 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num)
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
@@ -636,6 +637,7 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
@@ -645,8 +647,8 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
flow_match_sample_en);
if (!en)
continue;
- ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
- flow_match_sample_field_id);
+ ids[idx++].id = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
}
if (num != idx) {
rte_errno = EINVAL;
@@ -794,6 +796,12 @@ mlx5_devx_cmd_query_hca_parse_graph_node_cap
max_num_arc_out);
attr->max_num_sample = MLX5_GET(parse_graph_node_cap, hcattr,
max_num_sample);
+ attr->anchor_en = MLX5_GET(parse_graph_node_cap, hcattr, anchor_en);
+ attr->ext_sample_id = MLX5_GET(parse_graph_node_cap, hcattr, ext_sample_id);
+ attr->sample_tunnel_inner2 = MLX5_GET(parse_graph_node_cap, hcattr,
+ sample_tunnel_inner2);
+ attr->zero_size_supported = MLX5_GET(parse_graph_node_cap, hcattr,
+ zero_size_supported);
attr->sample_id_in_out = MLX5_GET(parse_graph_node_cap, hcattr,
sample_id_in_out);
attr->max_base_header_length = MLX5_GET(parse_graph_node_cap, hcattr,
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index c94b9eac06..5b33010155 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -114,6 +114,10 @@ struct mlx5_hca_flex_attr {
uint8_t max_num_arc_out;
uint8_t max_num_sample;
uint8_t max_num_prog_sample:5; /* From HCA CAP 2 */
+ uint8_t anchor_en:1;
+ uint8_t ext_sample_id:1;
+ uint8_t sample_tunnel_inner2:1;
+ uint8_t zero_size_supported:1;
uint8_t sample_id_in_out:1;
uint16_t max_base_header_length;
uint8_t max_sample_base_offset;
@@ -706,7 +710,8 @@ int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
struct mlx5_devx_modify_tir_attr *tir_attr);
__rte_internal
int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- uint32_t ids[], uint32_t num);
+ struct mlx5_ext_sample_id ids[],
+ uint32_t num, uint8_t *anchor);
__rte_internal
struct mlx5_devx_obj *
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 243952bf85..d93b0bfbae 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -1895,7 +1895,11 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 max_num_arc_in[0x08];
u8 max_num_arc_out[0x08];
u8 max_num_sample[0x08];
- u8 reserved_at_78[0x07];
+ u8 reserved_at_78[0x03];
+ u8 anchor_en[0x1];
+ u8 ext_sample_id[0x1];
+ u8 sample_tunnel_inner2[0x1];
+ u8 zero_size_supported[0x1];
u8 sample_id_in_out[0x1];
u8 max_base_header_length[0x10];
u8 reserved_at_90[0x08];
@@ -1905,6 +1909,24 @@ struct mlx5_ifc_parse_graph_node_cap_bits {
u8 header_length_mask_width[0x08];
};
+/* ext_sample_id structure, see PRM Table: Flow Match Sample ID Format. */
+struct mlx5_ext_sample_id {
+ union {
+ struct {
+#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
+ uint32_t format_select_dw:8;
+ uint32_t modify_field_id:12;
+ uint32_t sample_id:12;
+#else
+ uint32_t sample_id:12;
+ uint32_t modify_field_id:12;
+ uint32_t format_select_dw:8;
+#endif
+ };
+ uint32_t id;
+ };
+};
+
struct mlx5_ifc_flow_table_prop_layout_bits {
u8 ft_support[0x1];
u8 flow_tag[0x1];
@@ -4577,7 +4599,9 @@ struct mlx5_ifc_parse_graph_flex_bits {
u8 header_length_mode[0x4];
u8 header_length_field_offset[0x10];
u8 next_header_field_offset[0x10];
- u8 reserved_at_160[0x1b];
+ u8 reserved_at_160[0x12];
+ u8 head_anchor_id[0x6];
+ u8 reserved_at_178[0x3];
u8 next_header_field_size[0x5];
u8 header_length_field_mask[0x20];
u8 reserved_at_224[0x20];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index b8643cebdd..0b97c4e78d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -964,11 +964,13 @@ int
mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_ecpri_parser_profile *prf = &priv->sh->ecpri_parser;
struct mlx5_devx_graph_node_attr node = {
.modify_field_select = 0,
};
- uint32_t ids[8];
+ struct mlx5_ext_sample_id ids[8];
+ uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1004,15 +1006,20 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->offset[0] = 0x0;
prf->offset[1] = sizeof(uint32_t);
- prf->ids[0] = ids[0];
- prf->ids[1] = ids[1];
+ if (attr->ext_sample_id) {
+ prf->ids[0] = ids[0].sample_id;
+ prf->ids[1] = ids[1].sample_id;
+ } else {
+ prf->ids[0] = ids[0].id;
+ prf->ids[1] = ids[1].id;
+ }
return 0;
}
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 16b33e1548..83fb316ad8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1307,9 +1307,10 @@ struct mlx5_lag {
struct mlx5_flex_parser_devx {
struct mlx5_list_entry entry; /* List element at the beginning. */
uint32_t num_samples;
+ uint8_t anchor_id;
void *devx_obj;
struct mlx5_devx_graph_node_attr devx_conf;
- uint32_t sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_ext_sample_id sample_ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
};
/* Pattern field descriptor - how to translate flex pattern into samples. */
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index fb08910ddb..35f2a9923d 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -226,15 +226,18 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
misc_parameters_4);
void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hca_flex_attr *attr = &priv->sh->cdev->config.hca_attr.flex;
struct mlx5_flex_item *tp;
uint32_t i, pos = 0;
+ uint32_t sample_id;
RTE_SET_USED(dev);
MLX5_ASSERT(item->spec && item->mask);
spec = item->spec;
mask = item->mask;
tp = (struct mlx5_flex_item *)spec->handle;
- MLX5_ASSERT(mlx5_flex_index(dev->data->dev_private, tp) >= 0);
+ MLX5_ASSERT(mlx5_flex_index(priv, tp) >= 0);
for (i = 0; i < tp->mapnum; i++) {
struct mlx5_flex_pattern_field *map = tp->map + i;
uint32_t id = map->reg_id;
@@ -257,9 +260,13 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
MLX5_ASSERT(id < num_samples);
id += num_samples;
}
+ if (attr->ext_sample_id)
+ sample_id = tp->devx_fp->sample_ids[id].sample_id;
+ else
+ sample_id = tp->devx_fp->sample_ids[id].id;
mlx5_flex_set_match_sample(misc4_m, misc4_v,
def, msk & def, val & msk & def,
- tp->devx_fp->sample_ids[id], id);
+ sample_id, id);
pos += map->width;
}
}
@@ -1298,7 +1305,8 @@ mlx5_flex_parser_create_cb(void *list_ctx, void *ctx)
/* Query the firmware assigned sample ids. */
ret = mlx5_devx_cmd_query_parse_samples(fp->devx_obj,
fp->sample_ids,
- fp->num_samples);
+ fp->num_samples,
+ &fp->anchor_id);
if (ret)
goto error;
DRV_LOG(DEBUG, "DEVx flex parser %p created, samples num: %u",
--
2.27.0
* [PATCH v3 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
` (3 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas
Cc: rasland, Gregory Etelson, Alex Vesker
From: Gregory Etelson <getelson@nvidia.com>
A new mlx5dr_context member replaces the mlx5dr_cmd_query_caps
pointer; the capabilities structure is now reached through
mlx5dr_context.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 41 ++++++++++++++-------------
1 file changed, 22 insertions(+), 19 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index cb7e6011a0..dea460137d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -100,7 +100,7 @@ struct mlx5dr_definer_sel_ctrl {
};
struct mlx5dr_definer_conv_data {
- struct mlx5dr_cmd_query_caps *caps;
+ struct mlx5dr_context *ctx;
struct mlx5dr_definer_fc *fc;
uint8_t relaxed;
uint8_t tunnel;
@@ -904,6 +904,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp *m = item->mask;
struct mlx5dr_definer_fc *fc;
@@ -925,7 +926,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
}
if (m->hdr.teid) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -933,11 +934,11 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->item_idx = item_idx;
fc->tag_set = &mlx5dr_definer_gtp_teid_set;
fc->bit_mask = __mlx5_mask(header_gtp, teid);
- fc->byte_off = cd->caps->format_select_gtpu_dw_1 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_1 * DW_SIZE;
}
if (m->hdr.gtp_hdr_info) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -946,12 +947,12 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_flag_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
if (m->hdr.msg_type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -960,7 +961,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_msg_type_set;
fc->bit_mask = __mlx5_mask(header_gtp, msg_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, msg_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
return 0;
@@ -971,12 +972,13 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_gtp_psc *m = item->mask;
struct mlx5dr_definer_fc *fc;
/* Overwrite GTP extension flag to be 1 */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -985,12 +987,12 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_gtp, ext_hdr_flag);
fc->bit_off = __mlx5_dw_bit_off(header_gtp, ext_hdr_flag);
- fc->byte_off = cd->caps->format_select_gtpu_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_0 * DW_SIZE;
}
/* Overwrite next extension header type */
if (!cd->relaxed) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_2_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -1000,14 +1002,14 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
fc->bit_mask = __mlx5_mask(header_opt_gtp, next_ext_hdr_type);
fc->bit_off = __mlx5_dw_bit_off(header_opt_gtp, next_ext_hdr_type);
- fc->byte_off = cd->caps->format_select_gtpu_dw_2 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_dw_2 * DW_SIZE;
}
if (!m)
return 0;
if (m->hdr.type) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -1016,11 +1018,11 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_pdu_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, pdu_type);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, pdu_type);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
if (m->hdr.qfi) {
- if (!(cd->caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
+ if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_FIRST_EXT_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
}
@@ -1029,7 +1031,7 @@ mlx5dr_definer_conv_item_gtp_psc(struct mlx5dr_definer_conv_data *cd,
fc->tag_set = &mlx5dr_definer_gtp_ext_hdr_qfi_set;
fc->bit_mask = __mlx5_mask(header_gtp_psc, qfi);
fc->bit_off = __mlx5_dw_bit_off(header_gtp_psc, qfi);
- fc->byte_off = cd->caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
+ fc->byte_off = caps->format_select_gtpu_ext_dw_0 * DW_SIZE;
}
return 0;
@@ -1040,18 +1042,19 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
struct rte_flow_item *item,
int item_idx)
{
+ struct mlx5dr_cmd_query_caps *caps = cd->ctx->caps;
const struct rte_flow_item_ethdev *m = item->mask;
struct mlx5dr_definer_fc *fc;
uint8_t bit_offset = 0;
if (m->port_id) {
- if (!cd->caps->wire_regc_mask) {
+ if (!caps->wire_regc_mask) {
DR_LOG(ERR, "Port ID item not supported, missing wire REGC mask");
rte_errno = ENOTSUP;
return rte_errno;
}
- while (!(cd->caps->wire_regc_mask & (1 << bit_offset)))
+ while (!(caps->wire_regc_mask & (1 << bit_offset)))
bit_offset++;
fc = &cd->fc[MLX5DR_DEFINER_FNAME_VPORT_REG_C_0];
@@ -1060,7 +1063,7 @@ mlx5dr_definer_conv_item_port(struct mlx5dr_definer_conv_data *cd,
fc->tag_mask_set = &mlx5dr_definer_ones_set;
DR_CALC_SET_HDR(fc, registers, register_c_0);
fc->bit_off = bit_offset;
- fc->bit_mask = cd->caps->wire_regc_mask >> bit_offset;
+ fc->bit_mask = caps->wire_regc_mask >> bit_offset;
} else {
DR_LOG(ERR, "Pord ID item mask must specify ID mask");
rte_errno = EINVAL;
@@ -1673,7 +1676,7 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
int i, ret;
cd.fc = fc;
- cd.caps = ctx->caps;
+ cd.ctx = ctx;
cd.relaxed = mt->flags & MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH;
/* Collect all RTE fields to the field array and set header layout */
--
2.27.0
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH v3 3/5] net/mlx5/hws: add IPv6 routing extension matching support
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 1/5] net/mlx5: adopt IPv6 routing extension prm definition Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 2/5] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 4/5] net/mlx5/hws: add modify IPv6 protocol implementation Rongwei Liu
` (2 subsequent siblings)
5 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Alex Vesker
Add mlx5 HWS logic to match the IPv6 routing extension header.
Once IPv6 routing extension matching items are detected in the pattern
template creation callback, the PMD allocates a flex parser to sample
the first dword of the SRv6 header.
Only next_hdr/segments_left/type are supported for now.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 7 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 91 ++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++++
drivers/net/mlx5/mlx5.c | 92 ++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 16 +++++
drivers/net/mlx5/mlx5_flow.h | 28 ++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++++--
7 files changed, 268 insertions(+), 10 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 1f65ea7dcb..22a94c1e1a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -607,7 +607,7 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
int
mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
- struct mlx5_ext_sample_id ids[],
+ struct mlx5_ext_sample_id *ids,
uint32_t num, uint8_t *anchor)
{
uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
@@ -637,8 +637,9 @@ mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
(void *)flex_obj);
return -rte_errno;
}
- *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
- for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ if (anchor)
+ *anchor = MLX5_GET(parse_graph_flex, flex, head_anchor_id);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM && idx <= num; i++) {
void *s_off = (void *)((char *)sample + i *
MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
uint32_t en;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index dea460137d..ce7cf0504d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -125,6 +125,7 @@ struct mlx5dr_definer_conv_data {
X(SET_BE16, ipv4_len, v->total_length, rte_ipv4_hdr) \
X(SET_BE16, ipv6_payload_len, v->hdr.payload_len, rte_flow_item_ipv6) \
X(SET, ipv6_proto, v->hdr.proto, rte_flow_item_ipv6) \
+ X(SET, ipv6_routing_hdr, IPPROTO_ROUTING, rte_flow_item_ipv6) \
X(SET, ipv6_hop_limits, v->hdr.hop_limits, rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_127_96, &v->hdr.src_addr[0], rte_flow_item_ipv6) \
X(SET_BE32P, ipv6_src_addr_95_64, &v->hdr.src_addr[4], rte_flow_item_ipv6) \
@@ -293,6 +294,21 @@ mlx5dr_definer_integrity_set(struct mlx5dr_definer_fc *fc,
DR_SET(tag, ok1_bits, fc->byte_off, fc->bit_off, fc->bit_mask);
}
+static void
+mlx5dr_definer_ipv6_routing_ext_set(struct mlx5dr_definer_fc *fc,
+ const void *item,
+ uint8_t *tag)
+{
+ const struct rte_flow_item_ipv6_routing_ext *v = item;
+ uint32_t val;
+
+ val = v->hdr.next_hdr << __mlx5_dw_bit_off(header_ipv6_routing_ext, next_hdr);
+ val |= v->hdr.type << __mlx5_dw_bit_off(header_ipv6_routing_ext, type);
+ val |= v->hdr.segments_left <<
+ __mlx5_dw_bit_off(header_ipv6_routing_ext, segments_left);
+ DR_SET(tag, val, fc->byte_off, 0, fc->bit_mask);
+}
+
static void
mlx5dr_definer_gre_key_set(struct mlx5dr_definer_fc *fc,
const void *item_spec,
@@ -1605,6 +1621,76 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
return 0;
}
+static struct mlx5dr_definer_fc *
+mlx5dr_definer_get_flex_parser_fc(struct mlx5dr_definer_conv_data *cd, uint32_t byte_off)
+{
+ uint32_t byte_off_fp7 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_7);
+ uint32_t byte_off_fp0 = MLX5_BYTE_OFF(definer_hl, flex_parser.flex_parser_0);
+ enum mlx5dr_definer_fname fname = MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
+ struct mlx5dr_definer_fc *fc;
+ uint32_t idx;
+
+ if (byte_off < byte_off_fp7 || byte_off > byte_off_fp0) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ idx = (byte_off_fp0 - byte_off) / (sizeof(uint32_t));
+ fname += (enum mlx5dr_definer_fname)idx;
+ fc = &cd->fc[fname];
+ fc->byte_off = byte_off;
+ fc->bit_mask = UINT32_MAX;
+ return fc;
+}
+
+static int
+mlx5dr_definer_conv_item_ipv6_routing_ext(struct mlx5dr_definer_conv_data *cd,
+ struct rte_flow_item *item,
+ int item_idx)
+{
+ const struct rte_flow_item_ipv6_routing_ext *m = item->mask;
+ struct mlx5dr_definer_fc *fc;
+ bool inner = cd->tunnel;
+ uint32_t byte_off;
+
+ if (!cd->relaxed) {
+ fc = &cd->fc[DR_CALC_FNAME(IP_VERSION, inner)];
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_version_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l2, l3_type, inner);
+
+ /* Overwrite - Unset ethertype if present */
+ memset(&cd->fc[DR_CALC_FNAME(ETH_TYPE, inner)], 0, sizeof(*fc));
+
+ fc = &cd->fc[DR_CALC_FNAME(IP_PROTOCOL, inner)];
+ if (!fc->tag_set) {
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_hdr_set;
+ fc->tag_mask_set = &mlx5dr_definer_ones_set;
+ DR_CALC_SET(fc, eth_l3, protocol_next_header, inner);
+ }
+ }
+
+ if (!m)
+ return 0;
+
+ if (m->hdr.hdr_len || m->hdr.flags) {
+ rte_errno = ENOTSUP;
+ return rte_errno;
+ }
+
+ if (m->hdr.next_hdr || m->hdr.type || m->hdr.segments_left) {
+ byte_off = flow_hw_get_srh_flex_parser_byte_off_from_ctx(cd->ctx);
+ fc = mlx5dr_definer_get_flex_parser_fc(cd, byte_off);
+ if (!fc)
+ return rte_errno;
+
+ fc->item_idx = item_idx;
+ fc->tag_set = &mlx5dr_definer_ipv6_routing_ext_set;
+ }
+ return 0;
+}
+
static int
mlx5dr_definer_mt_set_fc(struct mlx5dr_match_template *mt,
struct mlx5dr_definer_fc *fc,
@@ -1786,6 +1872,11 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
ret = mlx5dr_definer_conv_item_meter_color(&cd, items, i);
item_flags |= MLX5_FLOW_ITEM_METER_COLOR;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ ret = mlx5dr_definer_conv_item_ipv6_routing_ext(&cd, items, i);
+ item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
default:
DR_LOG(ERR, "Unsupported item type %d", items->type);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 464872acd6..7420971f4a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -519,6 +519,21 @@ struct mlx5_ifc_header_ipv6_vtc_bits {
u8 flow_label[0x14];
};
+struct mlx5_ifc_header_ipv6_routing_ext_bits {
+ u8 next_hdr[0x8];
+ u8 hdr_len[0x8];
+ u8 type[0x8];
+ u8 segments_left[0x8];
+ union {
+ u8 flags[0x20];
+ struct {
+ u8 last_entry[0x8];
+ u8 flag[0x8];
+ u8 tag[0x10];
+ };
+ };
+};
+
struct mlx5_ifc_header_vxlan_bits {
u8 flags[0x8];
u8 reserved1[0x18];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0b97c4e78d..94fd5a91e3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -970,7 +970,6 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
.modify_field_select = 0,
};
struct mlx5_ext_sample_id ids[8];
- uint8_t anchor_id;
int ret;
if (!priv->sh->cdev->config.hca_attr.parse_graph_flex_node) {
@@ -1006,7 +1005,7 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
return (rte_errno == 0) ? -ENODEV : -rte_errno;
}
prf->num = 2;
- ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, &anchor_id);
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num, NULL);
if (ret) {
DRV_LOG(ERR, "Failed to query sample IDs.");
return (rte_errno == 0) ? -ENODEV : -rte_errno;
@@ -1041,6 +1040,95 @@ mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
prf->obj = NULL;
}
+/*
+ * Allocate a flex parser for the SRv6 routing header (SRH). Once refcnt
+ * drops to zero, the resources held by this parser are freed.
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ struct mlx5_ext_sample_id ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
+ void *ibv_ctx = priv->sh->cdev->ctx;
+ int ret;
+
+ memset(ids, 0xff, sizeof(ids));
+ if (!config->hca_attr.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported");
+ return -ENOTSUP;
+ }
+ if (__atomic_add_fetch(&priv->sh->srh_flex_parser.refcnt, 1, __ATOMIC_RELAXED) > 1)
+ return 0;
+
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+ /* Srv6 first two DW are not counted in. */
+ node.header_length_base_value = 0x8;
+ /* The unit is uint64_t. */
+ node.header_length_field_shift = 0x3;
+ /* Header length is the 2nd byte. */
+ node.header_length_field_offset = 0x8;
+ node.header_length_field_mask = 0xF;
+ /* One byte next header protocol. */
+ node.next_header_field_size = 0x8;
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
+ node.in[0].compare_condition_value = IPPROTO_ROUTING;
+ node.sample[0].flow_match_sample_en = 1;
+ /* First come first serve no matter inner or outer. */
+ node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+ node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
+ node.out[0].compare_condition_value = IPPROTO_TCP;
+ node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
+ node.out[1].compare_condition_value = IPPROTO_UDP;
+ node.out[2].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IPV6;
+ node.out[2].compare_condition_value = IPPROTO_IPV6;
+ priv->sh->srh_flex_parser.fp = mlx5_devx_cmd_create_flex_parser(ibv_ctx, &node);
+ if (!priv->sh->srh_flex_parser.fp) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.num = 1;
+ ret = mlx5_devx_cmd_query_parse_samples(priv->sh->srh_flex_parser.fp, ids,
+ priv->sh->srh_flex_parser.num,
+ &priv->sh->srh_flex_parser.anchor_id);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ priv->sh->srh_flex_parser.offset[0] = 0x0;
+ priv->sh->srh_flex_parser.ids[0].id = ids[0].id;
+ return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. The resources can then be reused.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure
+ */
+void
+mlx5_free_srh_flex_parser(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_internal_flex_parser_profile *fp = &priv->sh->srh_flex_parser;
+
+ if (__atomic_sub_fetch(&fp->refcnt, 1, __ATOMIC_RELAXED))
+ return;
+ if (fp->fp)
+ mlx5_devx_cmd_destroy(fp->fp);
+ fp->fp = NULL;
+ fp->num = 0;
+}
+
uint32_t
mlx5_get_supported_sw_parsing_offloads(const struct mlx5_hca_attr *attr)
{
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 83fb316ad8..bea1f62ea8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -543,6 +543,17 @@ struct mlx5_counter_stats_raw {
volatile struct flow_counter_stats *data;
};
+/* Mlx5 internal flex parser profile structure. */
+struct mlx5_internal_flex_parser_profile {
+ uint32_t num;/* Actual number of samples. */
+ /* Sample IDs for this profile. */
+ struct mlx5_ext_sample_id ids[MLX5_FLEX_ITEM_MAPPING_NUM];
+ uint32_t offset[MLX5_FLEX_ITEM_MAPPING_NUM]; /* Each ID sample offset. */
+ uint8_t anchor_id;
+ uint32_t refcnt;
+ void *fp; /* DevX flex parser object. */
+};
+
TAILQ_HEAD(mlx5_counter_pools, mlx5_flow_counter_pool);
/* Counter global management structure. */
@@ -1436,6 +1447,7 @@ struct mlx5_dev_ctx_shared {
struct mlx5_uar rx_uar; /* DevX UAR for Rx. */
struct mlx5_proc_priv *pppriv; /* Pointer to primary private process. */
struct mlx5_ecpri_parser_profile ecpri_parser;
+ struct mlx5_internal_flex_parser_profile srh_flex_parser; /* srh flex parser structure. */
/* Flex parser profiles information. */
LIST_HEAD(shared_rxqs, mlx5_rxq_ctrl) shared_rxqs; /* Shared RXQs. */
struct mlx5_aso_age_mng *aso_age_mng;
@@ -2258,4 +2270,8 @@ struct mlx5_list_entry *mlx5_flex_parser_clone_cb(void *list_ctx,
void *ctx);
void mlx5_flex_parser_clone_free_cb(void *tool_ctx,
struct mlx5_list_entry *entry);
+
+int mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev);
+
+void mlx5_free_srh_flex_parser(struct rte_eth_dev *dev);
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 86311b0b08..4bef2296b8 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -219,6 +219,10 @@ enum mlx5_feature_name {
/* Meter color item */
#define MLX5_FLOW_ITEM_METER_COLOR (UINT64_C(1) << 44)
+/* IPv6 routing extension item */
+#define MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT (UINT64_C(1) << 45)
+#define MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT (UINT64_C(1) << 46)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -2615,4 +2619,28 @@ int mlx5_flow_item_field_width(struct rte_eth_dev *dev,
enum rte_flow_field_id field, int inherit,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
+
+static __rte_always_inline int
+flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ struct mlx5_priv *priv;
+ struct mlx5_hca_flex_attr *attr;
+
+ priv = rte_eth_devices[port].data->dev_private;
+ attr = &priv->sh->cdev->config.hca_attr.flex;
+ if (priv->dr_ctx == dr_ctx && attr->ext_sample_id) {
+ if (priv->sh->srh_flex_parser.num)
+ return priv->sh->srh_flex_parser.ids[0].format_select_dw *
+ sizeof(uint32_t);
+ else
+ return UINT32_MAX;
+ }
+ }
+#endif
+ return UINT32_MAX;
+}
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 798bcca710..6799b8a89f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -213,17 +213,17 @@ flow_hw_hashfields_set(struct mlx5_flow_rss_desc *rss_desc,
}
/**
- * Generate the pattern item flags.
- * Will be used for shared RSS action.
+ * Generate the matching pattern item flags.
*
* @param[in] items
* Pointer to the list of items.
*
* @return
- * Item flags.
+ * Matching item flags. The RSS hash field function
+ * silently ignores unsupported flags.
*/
static uint64_t
-flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
+flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
{
uint64_t item_flags = 0;
uint64_t last_item = 0;
@@ -249,6 +249,10 @@ flow_hw_rss_item_flags_get(const struct rte_flow_item items[])
last_item = tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
+ last_item = tunnel ? MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT :
+ MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT;
+ break;
case RTE_FLOW_ITEM_TYPE_GRE:
last_item = MLX5_FLOW_LAYER_GRE;
break;
@@ -4738,6 +4742,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST:
case RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY:
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
+ case RTE_FLOW_ITEM_TYPE_IPV6_ROUTING_EXT:
break;
case RTE_FLOW_ITEM_TYPE_INTEGRITY:
/*
@@ -4866,7 +4871,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
"cannot create match template");
return NULL;
}
- it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
+ it->item_flags = flow_hw_matching_item_flags_get(tmpl_items);
if (copied_items) {
if (attr->ingress)
it->implicit_port = true;
@@ -4874,6 +4879,17 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
it->implicit_tag = true;
mlx5_free(copied_items);
}
+ /* Either inner or outer is allowed, not both. */
+ if (it->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) {
+ if (((it->item_flags & MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT) &&
+ (it->item_flags & MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT)) ||
+ (mlx5_alloc_srh_flex_parser(dev))) {
+ claim_zero(mlx5dr_match_template_destroy(it->mt));
+ mlx5_free(it);
+ return NULL;
+ }
+ }
__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
return it;
@@ -4905,6 +4921,9 @@ flow_hw_pattern_template_destroy(struct rte_eth_dev *dev __rte_unused,
NULL,
"item template in using");
}
+ if (template->item_flags & (MLX5_FLOW_ITEM_OUTER_IPV6_ROUTING_EXT |
+ MLX5_FLOW_ITEM_INNER_IPV6_ROUTING_EXT))
+ mlx5_free_srh_flex_parser(dev);
LIST_REMOVE(template, next);
claim_zero(mlx5dr_match_template_destroy(template->mt));
mlx5_free(template);
--
2.27.0
* [PATCH v3 4/5] net/mlx5/hws: add modify IPv6 protocol implementation
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
` (2 preceding siblings ...)
2023-02-14 12:57 ` [PATCH v3 3/5] net/mlx5/hws: add IPv6 routing extension matching support Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-14 12:57 ` [PATCH v3 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
2023-02-15 10:12 ` [PATCH v3 0/5] add IPv6 routing extension implementation Raslan Darawsheh
5 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland
Add the HWS implementation of IPv6 protocol (next header) modification.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 10 ++++++++++
2 files changed, 11 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index d93b0bfbae..c05bce714a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -760,6 +760,7 @@ enum mlx5_modification_field {
MLX5_MODI_TUNNEL_HDR_DW_1 = 0x75,
MLX5_MODI_GTPU_FIRST_EXT_DW_0 = 0x76,
MLX5_MODI_HASH_RESULT = 0x81,
+ MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A,
};
/* Total number of metadata reg_c's. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9e5db6b945..f93dd4073c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1357,6 +1357,7 @@ mlx5_flow_item_field_width(struct rte_eth_dev *dev,
case RTE_FLOW_FIELD_IPV6_DSCP:
return 6;
case RTE_FLOW_FIELD_IPV6_HOPLIMIT:
+ case RTE_FLOW_FIELD_IPV6_PROTO:
return 8;
case RTE_FLOW_FIELD_IPV6_SRC:
case RTE_FLOW_FIELD_IPV6_DST:
@@ -1883,6 +1884,15 @@ mlx5_flow_field_id_to_modify_info
info[idx].offset = data->offset;
}
break;
+ case RTE_FLOW_FIELD_IPV6_PROTO:
+ MLX5_ASSERT(data->offset + width <= 8);
+ off_be = 8 - (data->offset + width);
+ info[idx] = (struct field_modify_info){1, 0, MLX5_MODI_OUT_IPV6_NEXT_HDR};
+ if (mask)
+ mask[idx] = flow_modify_info_mask_8(width, off_be);
+ else
+ info[idx].offset = off_be;
+ break;
case RTE_FLOW_FIELD_POINTER:
case RTE_FLOW_FIELD_VALUE:
default:
--
2.27.0
* [PATCH v3 5/5] doc/mlx5: add IPv6 routing extension matching docs
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
` (3 preceding siblings ...)
2023-02-14 12:57 ` [PATCH v3 4/5] net/mlx5/hws: add modify IPv6 protocol implementation Rongwei Liu
@ 2023-02-14 12:57 ` Rongwei Liu
2023-02-15 10:12 ` [PATCH v3 0/5] add IPv6 routing extension implementation Raslan Darawsheh
5 siblings, 0 replies; 19+ messages in thread
From: Rongwei Liu @ 2023-02-14 12:57 UTC (permalink / raw)
To: dev, matan, viacheslavo, orika, thomas; +Cc: rasland, Ferruh Yigit
Update the mlx5 documentation for IPv6 routing extension header
matching.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
doc/guides/nics/features/default.ini | 1 +
doc/guides/nics/features/mlx5.ini | 1 +
doc/guides/nics/mlx5.rst | 10 ++++++++++
3 files changed, 12 insertions(+)
diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 976a020985..b1ad3bdca0 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -143,6 +143,7 @@ udp =
vlan =
vxlan =
vxlan_gpe =
+ipv6_routing_ext =
[rte_flow actions]
age =
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index eb016f34da..dac5ee5579 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -89,6 +89,7 @@ vlan = Y
vxlan = Y
vxlan_gpe = Y
represented_port = Y
+ipv6_routing_ext = Y
[rte_flow actions]
age = I
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9c6f1cca19..ee2df66e77 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -106,6 +106,7 @@ Features
- Sub-Function representors.
- Sub-Function.
- Matching on represented port.
+- Matching on IPv6 routing extension header.
Limitations
@@ -174,6 +175,7 @@ Limitations
- ``-EAGAIN`` for ``rte_eth_dev_start()``.
- ``-EBUSY`` for ``rte_eth_dev_stop()``.
- Matching on ICMP6 following an IPv6 routing extension header should match ipv6_routing_ext_next_hdr instead of ICMP6.
- When using Verbs flow engine (``dv_flow_en`` = 0), flow pattern without any
specific VLAN will match for VLAN packets as well:
@@ -274,6 +276,14 @@ Limitations
extension header type = 0x85).
- Match on GTP extension header is not supported in group 0.
+- Match on IPv6 routing extension header supports the following fields only:
+
+ - type
+ - next_hdr
+ - segments_left
+
Only supported in HW steering mode (``dv_flow_en=2``).
+
- Flex item:
- Hardware support: BlueField-2.
--
2.27.0
* RE: [PATCH v3 0/5] add IPv6 routing extension implementation
2023-02-14 12:57 ` [PATCH v3 0/5] add IPv6 routing extension implementation Rongwei Liu
` (4 preceding siblings ...)
2023-02-14 12:57 ` [PATCH v3 5/5] doc/mlx5: add IPv6 routing extension matching docs Rongwei Liu
@ 2023-02-15 10:12 ` Raslan Darawsheh
5 siblings, 0 replies; 19+ messages in thread
From: Raslan Darawsheh @ 2023-02-15 10:12 UTC (permalink / raw)
To: Rongwei Liu, dev, Matan Azrad, Slava Ovsiienko, Ori Kam,
NBU-Contact-Thomas Monjalon (EXTERNAL)
Hi,
> -----Original Message-----
> From: Rongwei Liu <rongweil@nvidia.com>
> Sent: Tuesday, February 14, 2023 2:57 PM
> To: dev@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
> Cc: Raslan Darawsheh <rasland@nvidia.com>
> Subject: [PATCH v3 0/5] add IPv6 routing extension implementation
>
> Add IPv6 routing extension matching and IPv6 protocol modify field support.
>
> This patch relies on the preceding ethdev one:
> http://patches.dpdk.org/project/dpdk/patch/20230202100021.2445976-2-
> rongweil@nvidia.com/
>
> Including one commit from Gregory to pass the compilation.
>
> v3: add more sentences into mlx5.rst.
>
> Gregory Etelson (1):
> net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
>
> Rongwei Liu (4):
> net/mlx5: adopt IPv6 routing extension prm definition
> net/mlx5/hws: add IPv6 routing extension matching support
> net/mlx5/hws: add modify IPv6 protocol implementation
> doc/mlx5: add IPv6 routing extension matching docs
>
> doc/guides/nics/features/default.ini | 1 +
> doc/guides/nics/features/mlx5.ini | 1 +
> doc/guides/nics/mlx5.rst | 10 ++
> drivers/common/mlx5/mlx5_devx_cmds.c | 17 +++-
> drivers/common/mlx5/mlx5_devx_cmds.h | 7 +-
> drivers/common/mlx5/mlx5_prm.h | 29 +++++-
> drivers/net/mlx5/hws/mlx5dr_definer.c | 132 ++++++++++++++++++++++-
> --- drivers/net/mlx5/hws/mlx5dr_definer.h | 15 +++
> drivers/net/mlx5/mlx5.c | 103 +++++++++++++++++++-
> drivers/net/mlx5/mlx5.h | 19 +++-
> drivers/net/mlx5/mlx5_flow.h | 28 ++++++
> drivers/net/mlx5/mlx5_flow_dv.c | 10 ++
> drivers/net/mlx5/mlx5_flow_flex.c | 14 ++-
> drivers/net/mlx5/mlx5_flow_hw.c | 29 +++++-
> 14 files changed, 376 insertions(+), 39 deletions(-)
>
> --
> 2.27.0
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh