* [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
@ 2023-10-29 16:31 Gregory Etelson
2023-10-29 16:31 ` [PATCH 02/30] net/mlx5: add flow_hw_get_reg_id_from_ctx() Gregory Etelson
` (29 more replies)
0 siblings, 30 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
A new mlx5dr_context member in definer_conv_data replaces the
mlx5dr_cmd_query_caps pointer. The capabilities structure is already a
member of mlx5dr_context, so it stays reachable through the context.
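For orientation only, a minimal sketch of the conversion-data layout this
change implies; the field names below are assumptions, not copied from the
tree:

    struct mlx5dr_definer_conv_data {
            struct mlx5dr_context *ctx; /* replaces struct mlx5dr_cmd_query_caps *caps */
            /* other conversion state unchanged */
    };

    /* capabilities stay reachable through the context, e.g.: */
    caps = cd->ctx->caps;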
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 95b5d4b70e..75ba46b966 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1092,7 +1092,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
return rte_errno;
}
- if (m->hdr.teid) {
+ if (m->teid) {
if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
@@ -1118,7 +1118,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
}
- if (m->hdr.msg_type) {
+ if (m->msg_type) {
if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
rte_errno = ENOTSUP;
return rte_errno;
--
2.39.2
* [PATCH 02/30] net/mlx5: add flow_hw_get_reg_id_from_ctx()
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 03/30] net/mlx5/hws: Definer, use flow_hw_get_reg_id_from_ctx function call Gregory Etelson
` (28 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
The new function `flow_hw_get_reg_id_from_ctx()` maps an input mlx5dr
context, flow item type and register index to a REG_C register.
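A usage sketch, mirroring how the next patch calls it from the definer
(`cd->ctx` is the context member added in patch 01):

    /* Resolve the REG_C backing a TAG item for the port owning dr_ctx. */
    reg = flow_hw_get_reg_id_from_ctx(cd->ctx,
                                      RTE_FLOW_ITEM_TYPE_TAG,
                                      v->index);
    if (reg <= 0) {
            /* No probed port owns this context, or no register available. */
            rte_errno = EINVAL;
            return rte_errno;
    }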
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 3ea2548d2b..92dfd9a3a4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1711,6 +1711,28 @@ flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id)
}
}
+static __rte_always_inline int
+flow_hw_get_reg_id_from_ctx(void *dr_ctx,
+ enum rte_flow_item_type type, uint32_t id)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ struct mlx5_priv *priv;
+
+ priv = rte_eth_devices[port].data->dev_private;
+ if (priv->dr_ctx == dr_ctx)
+ return flow_hw_get_reg_id(type, id);
+ }
+#else
+ RTE_SET_USED(dr_ctx);
+ RTE_SET_USED(type);
+ RTE_SET_USED(id);
+#endif
+ return REG_NON;
+}
+
void flow_hw_set_port_info(struct rte_eth_dev *dev);
void flow_hw_clear_port_info(struct rte_eth_dev *dev);
--
2.39.2
* [PATCH 03/30] net/mlx5/hws: Definer, use flow_hw_get_reg_id_from_ctx function call
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
2023-10-29 16:31 ` [PATCH 02/30] net/mlx5: add flow_hw_get_reg_id_from_ctx() Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers Gregory Etelson
` (27 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
Switch the definer to the new `flow_hw_get_reg_id_from_ctx()` helper, which
matches the REG_C register to the input mlx5dr context.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 16 +++++++++++-----
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 75ba46b966..0f53c1e3b5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1448,7 +1448,9 @@ mlx5dr_definer_conv_item_tag(struct mlx5dr_definer_conv_data *cd,
return 0;
if (item->type == RTE_FLOW_ITEM_TYPE_TAG)
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, v->index);
+ reg = flow_hw_get_reg_id_from_ctx(cd->ctx,
+ RTE_FLOW_ITEM_TYPE_TAG,
+ v->index);
else
reg = (int)v->index;
@@ -1508,7 +1510,9 @@ mlx5dr_definer_conv_item_quota(struct mlx5dr_definer_conv_data *cd,
__rte_unused struct rte_flow_item *item,
int item_idx)
{
- int mtr_reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ int mtr_reg =
+ flow_hw_get_reg_id_from_ctx(cd->ctx, RTE_FLOW_ITEM_TYPE_METER_COLOR,
+ 0);
struct mlx5dr_definer_fc *fc;
if (mtr_reg < 0) {
@@ -1538,7 +1542,7 @@ mlx5dr_definer_conv_item_metadata(struct mlx5dr_definer_conv_data *cd,
if (!m)
return 0;
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, -1);
+ reg = flow_hw_get_reg_id_from_ctx(cd->ctx, RTE_FLOW_ITEM_TYPE_META, -1);
if (reg <= 0) {
DR_LOG(ERR, "Invalid register for item metadata");
rte_errno = EINVAL;
@@ -1748,7 +1752,8 @@ mlx5dr_definer_conv_item_conntrack(struct mlx5dr_definer_conv_data *cd,
if (!m)
return 0;
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_CONNTRACK, -1);
+ reg = flow_hw_get_reg_id_from_ctx(cd->ctx, RTE_FLOW_ITEM_TYPE_CONNTRACK,
+ -1);
if (reg <= 0) {
DR_LOG(ERR, "Invalid register for item conntrack");
rte_errno = EINVAL;
@@ -1889,7 +1894,8 @@ mlx5dr_definer_conv_item_meter_color(struct mlx5dr_definer_conv_data *cd,
if (!m)
return 0;
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ reg = flow_hw_get_reg_id_from_ctx(cd->ctx,
+ RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
MLX5_ASSERT(reg > 0);
fc = mlx5dr_definer_get_register_fc(cd, reg);
--
2.39.2
* [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
2023-10-29 16:31 ` [PATCH 02/30] net/mlx5: add flow_hw_get_reg_id_from_ctx() Gregory Etelson
2023-10-29 16:31 ` [PATCH 03/30] net/mlx5/hws: Definer, use flow_hw_get_reg_id_from_ctx function call Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-11-05 20:27 ` Thomas Monjalon
2023-10-29 16:31 ` [PATCH 05/30] net/mlx5: separate port REG_C registers usage Gregory Etelson
` (26 subsequent siblings)
29 siblings, 1 reply; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
1. Add an rte_eth_dev parameter to `flow_hw_get_reg_id()`.
2. Add `mlx5_flow_hw_get_reg_id()`.
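For reference, a hedged sketch of how the two additions are expected to fit
together; the `mlx5_flow_hw_get_reg_id()` body below is an assumption based
on the declarations in this patch, not the final implementation:

    /* Callers that know their port now pass it explicitly: */
    reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);

    /* The context-based entry point can then delegate the port lookup: */
    int
    mlx5_flow_hw_get_reg_id(struct mlx5dr_context *ctx,
                            enum rte_flow_item_type type, uint32_t id)
    {
            return flow_hw_get_reg_id_from_ctx(ctx, type, id);
    }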
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 2 +-
drivers/net/mlx5/mlx5_flow.h | 13 +++++++++++--
drivers/net/mlx5/mlx5_flow_dv.c | 12 ++++++------
drivers/net/mlx5/mlx5_flow_hw.c | 7 +++----
4 files changed, 21 insertions(+), 13 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a500afd4f7..45a67607ed 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1718,7 +1718,7 @@ flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
}
}
-static void
+void
flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 92dfd9a3a4..9344b5178a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1678,8 +1678,10 @@ void flow_hw_clear_flow_metadata_config(void);
* TODO: Per port / device, FDB or NIC for Meta matching.
*/
static __rte_always_inline int
-flow_hw_get_reg_id(enum rte_flow_item_type type, uint32_t id)
+flow_hw_get_reg_id(struct rte_eth_dev *dev,
+ enum rte_flow_item_type type, uint32_t id)
{
+ RTE_SET_USED(dev);
switch (type) {
case RTE_FLOW_ITEM_TYPE_META:
#ifdef HAVE_MLX5_HWS_SUPPORT
@@ -1723,7 +1725,8 @@ flow_hw_get_reg_id_from_ctx(void *dr_ctx,
priv = rte_eth_devices[port].data->dev_private;
if (priv->dr_ctx == dr_ctx)
- return flow_hw_get_reg_id(type, id);
+ return flow_hw_get_reg_id(&rte_eth_devices[port],
+ type, id);
}
#else
RTE_SET_USED(dr_ctx);
@@ -2874,6 +2877,12 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
}
void
mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
+void
+flow_rxq_mark_flag_set(struct rte_eth_dev *dev);
+int
+mlx5_flow_hw_get_reg_id(struct mlx5dr_context *ctx,
+ enum rte_flow_item_type type, uint32_t id);
+
#ifdef HAVE_MLX5_HWS_SUPPORT
struct mlx5_mirror;
void
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3dc2fe5c71..05a374493d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1919,8 +1919,8 @@ mlx5_flow_field_id_to_modify_info
off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
16 - (data->offset + width) + 16 : data->offset;
if (priv->sh->config.dv_flow_en == 2)
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG,
- tag_index);
+ reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG,
+ data->level);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
tag_index, error);
@@ -2025,7 +2025,7 @@ mlx5_flow_field_id_to_modify_info
if (priv->sh->config.dv_flow_en == 2)
reg = flow_hw_get_reg_id
- (RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ (dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR,
0, error);
@@ -10256,7 +10256,7 @@ flow_dv_translate_item_meta(struct rte_eth_dev *dev,
if (!!(key_type & MLX5_SET_MATCHER_SW))
reg = flow_dv_get_metadata_reg(dev, attr, NULL);
else
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_META, 0);
+ reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_META, 0);
if (reg < 0)
return;
MLX5_ASSERT(reg != REG_NON);
@@ -10359,7 +10359,7 @@ flow_dv_translate_item_tag(struct rte_eth_dev *dev, void *key,
if (!!(key_type & MLX5_SET_MATCHER_SW))
reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, index, NULL);
else
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, index);
+ reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG, index);
MLX5_ASSERT(reg > 0);
flow_dv_match_meta_reg(key, (enum modify_reg)reg, tag_v->data, tag_m->data);
}
@@ -11057,7 +11057,7 @@ flow_dv_translate_item_meter_color(struct rte_eth_dev *dev, void *key,
if (!!(key_type & MLX5_SET_MATCHER_SW))
reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
else
- reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
if (reg == REG_NON)
return;
flow_dv_match_meta_reg(key, (enum modify_reg)reg, value, mask);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 88fe8d9a68..d3f065e9c1 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5595,9 +5595,8 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
if (tag == NULL)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Tag spec is NULL");
- tag_idx = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_TAG, tag->index);
+ NULL, "Tag spec is NULL");
+ tag_idx = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG, tag->index);
if (tag_idx == REG_NON)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -5655,7 +5654,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
{
- int reg = flow_hw_get_reg_id(RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ int reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
if (reg == REG_NON)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
--
2.39.2
* [PATCH 05/30] net/mlx5: separate port REG_C registers usage
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (2 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 06/30] net/mlx5: merge REG_C aliases Gregory Etelson
` (25 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
The current implementation stored the REG_C registers available for HWS tags
in a PMD global array. As a result, the PMD could not work properly with
different port types that allocate REG_C registers differently.
This patch stores the registers available to a port in the port's
shared device context. Register values are assigned according to the port
capabilities.
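An illustrative lookup, paraphrasing the hunks below (not additional driver
code): per-port register information now comes from the shared context:

    struct mlx5_dev_ctx_shared *sh = MLX5_SH(dev);
    struct mlx5_dev_registers *reg = &sh->registers;

    /* Meter color REG_C for this port's eswitch domain. */
    enum modify_reg color = reg->mtr_color_reg;
    /* REG_C backing application TAG index 'id'. */
    enum modify_reg tag = reg->hw_avl_tags[id];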
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 12 +++
drivers/net/mlx5/linux/mlx5_os.c | 16 ++--
drivers/net/mlx5/mlx5.c | 4 -
drivers/net/mlx5/mlx5.h | 11 ++-
drivers/net/mlx5/mlx5_flow.c | 29 ++-----
drivers/net/mlx5/mlx5_flow.h | 25 ++----
drivers/net/mlx5/mlx5_flow_dv.c | 13 +--
drivers/net/mlx5/mlx5_flow_hw.c | 129 ++++-------------------------
drivers/net/mlx5/mlx5_flow_meter.c | 14 ++--
9 files changed, 78 insertions(+), 175 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index bced5a59dd..e13ca3cd22 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -864,6 +864,18 @@ enum modify_reg {
REG_C_11,
};
+static __rte_always_inline uint8_t
+mlx5_regc_index(enum modify_reg regc_val)
+{
+ return (uint8_t)(regc_val - REG_C_0);
+}
+
+static __rte_always_inline enum modify_reg
+mlx5_regc_value(uint8_t regc_ix)
+{
+ return REG_C_0 + regc_ix;
+}
+
/* Modification sub command. */
struct mlx5_modification_cmd {
union {
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index d5ef695e6d..96d32d11d8 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1328,14 +1328,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Prefer REG_C_3 if it is available.
*/
if (reg_c_mask & (1 << (REG_C_3 - REG_C_0)))
- priv->mtr_color_reg = REG_C_3;
+ sh->registers.mtr_color_reg = REG_C_3;
else
- priv->mtr_color_reg = ffs(reg_c_mask)
- - 1 + REG_C_0;
+ sh->registers.mtr_color_reg =
+ ffs(reg_c_mask) - 1 + REG_C_0;
priv->mtr_en = 1;
priv->mtr_reg_share = hca_attr->qos.flow_meter;
DRV_LOG(DEBUG, "The REG_C meter uses is %d",
- priv->mtr_color_reg);
+ sh->registers.mtr_color_reg);
}
}
if (hca_attr->qos.sup && hca_attr->qos.flow_meter_aso_sup) {
@@ -1360,7 +1360,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
sh->tunnel_header_2_3 = 1;
#endif
#ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
- if (hca_attr->flow_hit_aso && priv->mtr_color_reg == REG_C_3) {
+ if (hca_attr->flow_hit_aso && sh->registers.mtr_color_reg == REG_C_3) {
sh->flow_hit_aso_en = 1;
err = mlx5_flow_aso_age_mng_init(sh);
if (err) {
@@ -1374,7 +1374,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
defined (HAVE_MLX5_DR_ACTION_ASO_CT)
/* HWS create CT ASO SQ based on HWS configure queue number. */
if (sh->config.dv_flow_en != 2 &&
- hca_attr->ct_offload && priv->mtr_color_reg == REG_C_3) {
+ hca_attr->ct_offload && sh->registers.mtr_color_reg == REG_C_3) {
err = mlx5_flow_aso_ct_mng_init(sh);
if (err) {
err = -err;
@@ -1618,8 +1618,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
goto error;
}
/* Only HWS requires this information. */
- flow_hw_init_tags_set(eth_dev);
- flow_hw_init_flow_metadata_config(eth_dev);
+ if (sh->refcnt == 1)
+ flow_hw_init_tags_set(eth_dev);
if (priv->sh->config.dv_esw_en &&
flow_hw_create_vport_action(eth_dev)) {
DRV_LOG(ERR, "port %u failed to create vport action",
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 08b7b03365..c13ce2c13c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2173,10 +2173,6 @@ mlx5_dev_close(struct rte_eth_dev *dev)
flow_hw_destroy_vport_action(dev);
flow_hw_resource_release(dev);
flow_hw_clear_port_info(dev);
- if (priv->sh->config.dv_flow_en == 2) {
- flow_hw_clear_flow_metadata_config();
- flow_hw_clear_tags_set(dev);
- }
#endif
if (priv->rxq_privs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f3b872f59c..01cb21fc93 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1373,6 +1373,14 @@ struct mlx5_hws_cnt_svc_mng {
struct mlx5_hws_aso_mng aso_mng __rte_cache_aligned;
};
+#define MLX5_FLOW_HW_TAGS_MAX 8
+
+struct mlx5_dev_registers {
+ enum modify_reg mlx5_flow_hw_aso_tag;
+ enum modify_reg mtr_color_reg; /* Meter color match REG_C. */
+ enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
+};
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
@@ -1393,7 +1401,6 @@ struct mlx5_dev_ctx_shared {
uint32_t drop_action_check_flag:1; /* Check Flag for drop action. */
uint32_t flow_priority_check_flag:1; /* Check Flag for flow priority. */
uint32_t metadata_regc_check_flag:1; /* Check Flag for metadata REGC. */
- uint32_t hws_tags:1; /* Check if tags info for HWS initialized. */
uint32_t shared_mark_enabled:1;
/* If mark action is enabled on Rxqs (shared E-Switch domain). */
uint32_t lag_rx_port_affinity_en:1;
@@ -1482,6 +1489,7 @@ struct mlx5_dev_ctx_shared {
uint32_t host_shaper_rate:8;
uint32_t lwm_triggered:1;
struct mlx5_hws_cnt_svc_mng *cnt_svc;
+ struct mlx5_dev_registers registers;
struct mlx5_dev_shared_port port[]; /* per device port data array. */
};
@@ -1811,7 +1819,6 @@ struct mlx5_priv {
/* Hash table of Rx metadata register copy table. */
struct mlx5_mtr_config mtr_config; /* Meter configuration */
uint8_t mtr_sfx_reg; /* Meter prefix-suffix flow match REG_C. */
- uint8_t mtr_color_reg; /* Meter color match REG_C. */
struct mlx5_legacy_flow_meters flow_meters; /* MTR list. */
struct mlx5_l3t_tbl *mtr_profile_tbl; /* Meter index lookup table. */
struct mlx5_flow_meter_profile *mtr_profile_arr; /* Profile array. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 45a67607ed..3ddc3ba772 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -39,18 +39,6 @@
*/
struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS];
-/*
- * A global structure to save the available REG_C_x for tags usage.
- * The Meter color REG (ASO) and the last available one will be reserved
- * for PMD internal usage.
- * Since there is no "port" concept in the driver, it is assumed that the
- * available tags set will be the minimum intersection.
- * 3 - in FDB mode / 5 - in legacy mode
- */
-uint32_t mlx5_flow_hw_avl_tags_init_cnt;
-enum modify_reg mlx5_flow_hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX] = {REG_NON};
-enum modify_reg mlx5_flow_hw_aso_tag;
-
struct tunnel_default_miss_ctx {
uint16_t *queue;
__extension__
@@ -1320,6 +1308,7 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_sh_config *config = &priv->sh->config;
+ struct mlx5_dev_registers *reg = &priv->sh->registers;
enum modify_reg start_reg;
bool skip_mtr_reg = false;
@@ -1375,23 +1364,23 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* should use the meter color register for match.
*/
if (priv->mtr_reg_share)
- return priv->mtr_color_reg;
+ return reg->mtr_color_reg;
else
- return priv->mtr_color_reg != REG_C_2 ? REG_C_2 :
+ return reg->mtr_color_reg != REG_C_2 ? REG_C_2 :
REG_C_3;
case MLX5_MTR_COLOR:
case MLX5_ASO_FLOW_HIT:
case MLX5_ASO_CONNTRACK:
case MLX5_SAMPLE_ID:
/* All features use the same REG_C. */
- MLX5_ASSERT(priv->mtr_color_reg != REG_NON);
- return priv->mtr_color_reg;
+ MLX5_ASSERT(reg->mtr_color_reg != REG_NON);
+ return reg->mtr_color_reg;
case MLX5_COPY_MARK:
/*
* Metadata COPY_MARK register using is in meter suffix sub
* flow while with meter. It's safe to share the same register.
*/
- return priv->mtr_color_reg != REG_C_2 ? REG_C_2 : REG_C_3;
+ return reg->mtr_color_reg != REG_C_2 ? REG_C_2 : REG_C_3;
case MLX5_APP_TAG:
/*
* If meter is enable, it will engage the register for color
@@ -1400,7 +1389,7 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* match.
* If meter is disable, free to use all available registers.
*/
- start_reg = priv->mtr_color_reg != REG_C_2 ? REG_C_2 :
+ start_reg = reg->mtr_color_reg != REG_C_2 ? REG_C_2 :
(priv->mtr_reg_share ? REG_C_3 : REG_C_4);
skip_mtr_reg = !!(priv->mtr_en && start_reg == REG_C_2);
if (id > (uint32_t)(REG_C_7 - start_reg))
@@ -1418,7 +1407,7 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* color register.
*/
if (skip_mtr_reg && priv->sh->flow_mreg_c
- [id + start_reg - REG_C_0] >= priv->mtr_color_reg) {
+ [id + start_reg - REG_C_0] >= reg->mtr_color_reg) {
if (id >= (uint32_t)(REG_C_7 - start_reg))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -6491,7 +6480,7 @@ flow_sample_split_prep(struct rte_eth_dev *dev,
* metadata regC is REG_NON, back to use application tag
* index 0.
*/
- if (unlikely(priv->mtr_color_reg == REG_NON))
+ if (unlikely(priv->sh->registers.mtr_color_reg == REG_NON))
ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, 0, error);
else
ret = mlx5_flow_get_reg_id(dev, MLX5_SAMPLE_ID, 0, error);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9344b5178a..011db1fb75 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1618,11 +1618,6 @@ struct flow_hw_port_info {
extern struct flow_hw_port_info mlx5_flow_hw_port_infos[RTE_MAX_ETHPORTS];
-#define MLX5_FLOW_HW_TAGS_MAX 8
-extern uint32_t mlx5_flow_hw_avl_tags_init_cnt;
-extern enum modify_reg mlx5_flow_hw_avl_tags[];
-extern enum modify_reg mlx5_flow_hw_aso_tag;
-
/*
* Get metadata match tag and mask for given rte_eth_dev port.
* Used in HWS rule creation.
@@ -1664,13 +1659,6 @@ flow_hw_get_wire_port(struct ibv_context *ibctx)
}
#endif
-extern uint32_t mlx5_flow_hw_flow_metadata_config_refcnt;
-extern uint8_t mlx5_flow_hw_flow_metadata_esw_en;
-extern uint8_t mlx5_flow_hw_flow_metadata_xmeta_en;
-
-void flow_hw_init_flow_metadata_config(struct rte_eth_dev *dev);
-void flow_hw_clear_flow_metadata_config(void);
-
/*
* Convert metadata or tag to the actual register.
* META: Can only be used to match in the FDB in this stage, fixed C_1.
@@ -1681,12 +1669,14 @@ static __rte_always_inline int
flow_hw_get_reg_id(struct rte_eth_dev *dev,
enum rte_flow_item_type type, uint32_t id)
{
- RTE_SET_USED(dev);
+ struct mlx5_dev_ctx_shared *sh = MLX5_SH(dev);
+ struct mlx5_dev_registers *reg = &sh->registers;
+
switch (type) {
case RTE_FLOW_ITEM_TYPE_META:
#ifdef HAVE_MLX5_HWS_SUPPORT
- if (mlx5_flow_hw_flow_metadata_esw_en &&
- mlx5_flow_hw_flow_metadata_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
+ if (sh->config.dv_esw_en &&
+ sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
return REG_C_1;
}
#endif
@@ -1702,12 +1692,12 @@ flow_hw_get_reg_id(struct rte_eth_dev *dev,
return REG_A;
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
- return mlx5_flow_hw_aso_tag;
+ return reg->mlx5_flow_hw_aso_tag;
case RTE_FLOW_ITEM_TYPE_TAG:
if (id == MLX5_LINEAR_HASH_TAG_INDEX)
return REG_C_3;
MLX5_ASSERT(id < MLX5_FLOW_HW_TAGS_MAX);
- return mlx5_flow_hw_avl_tags[id];
+ return reg->hw_avl_tags[id];
default:
return REG_NON;
}
@@ -1740,7 +1730,6 @@ void flow_hw_set_port_info(struct rte_eth_dev *dev);
void flow_hw_clear_port_info(struct rte_eth_dev *dev);
void flow_hw_init_tags_set(struct rte_eth_dev *dev);
-void flow_hw_clear_tags_set(struct rte_eth_dev *dev);
int flow_hw_create_vport_action(struct rte_eth_dev *dev);
void flow_hw_destroy_vport_action(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 05a374493d..024023abb5 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1919,7 +1919,8 @@ mlx5_flow_field_id_to_modify_info
off_be = (tag_index == MLX5_LINEAR_HASH_TAG_INDEX) ?
16 - (data->offset + width) + 16 : data->offset;
if (priv->sh->config.dv_flow_en == 2)
- reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG,
+ reg = flow_hw_get_reg_id(dev,
+ RTE_FLOW_ITEM_TYPE_TAG,
data->level);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG,
@@ -2025,7 +2026,7 @@ mlx5_flow_field_id_to_modify_info
if (priv->sh->config.dv_flow_en == 2)
reg = flow_hw_get_reg_id
- (dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ (dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR,
0, error);
@@ -3922,7 +3923,7 @@ flow_dv_validate_item_meter_color(struct rte_eth_dev *dev,
};
int ret;
- if (priv->mtr_color_reg == REG_NON)
+ if (priv->sh->registers.mtr_color_reg == REG_NON)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM, item,
"meter color register"
@@ -8373,7 +8374,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
if (ret < 0)
return ret;
if ((action_flags & MLX5_FLOW_ACTION_SET_TAG) &&
- tag_id == 0 && priv->mtr_color_reg == REG_NON)
+ tag_id == 0 &&
+ priv->sh->registers.mtr_color_reg == REG_NON)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"sample after tag action causes metadata tag index 0 corruption");
@@ -11057,7 +11059,8 @@ flow_dv_translate_item_meter_color(struct rte_eth_dev *dev, void *key,
if (!!(key_type & MLX5_SET_MATCHER_SW))
reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR, 0, NULL);
else
- reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ reg = flow_hw_get_reg_id(dev,
+ RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
if (reg == REG_NON)
return;
flow_dv_match_meta_reg(key, (enum modify_reg)reg, value, mask);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d3f065e9c1..22cf412035 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5654,7 +5654,9 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
{
- int reg = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ int reg = flow_hw_get_reg_id(dev,
+ RTE_FLOW_ITEM_TYPE_METER_COLOR,
+ 0);
if (reg == REG_NON)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -8456,126 +8458,29 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev)
*/
void flow_hw_init_tags_set(struct rte_eth_dev *dev)
{
- struct mlx5_priv *priv = dev->data->dev_private;
- uint32_t meta_mode = priv->sh->config.dv_xmeta_en;
- uint8_t masks = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
- uint32_t i, j;
- uint8_t reg_off;
+ struct mlx5_dev_ctx_shared *sh = MLX5_SH(dev);
+ struct mlx5_dev_registers *reg = &sh->registers;
+ uint32_t meta_mode = sh->config.dv_xmeta_en;
+ uint8_t masks = (uint8_t)sh->cdev->config.hca_attr.set_reg_c;
uint8_t unset = 0;
- uint8_t common_masks = 0;
+ uint32_t i, j;
/*
* The CAPA is global for common device but only used in net.
* It is shared per eswitch domain.
*/
- if (!!priv->sh->hws_tags)
- return;
- unset |= 1 << (priv->mtr_color_reg - REG_C_0);
- unset |= 1 << (REG_C_6 - REG_C_0);
- if (priv->sh->config.dv_esw_en)
- unset |= 1 << (REG_C_0 - REG_C_0);
+ unset |= 1 << mlx5_regc_index(reg->mtr_color_reg);
+ unset |= 1 << mlx5_regc_index(REG_C_6);
+ if (sh->config.dv_esw_en)
+ unset |= 1 << mlx5_regc_index(REG_C_0);
if (meta_mode == MLX5_XMETA_MODE_META32_HWS)
- unset |= 1 << (REG_C_1 - REG_C_0);
+ unset |= 1 << mlx5_regc_index(REG_C_1);
masks &= ~unset;
- /*
- * If available tag registers were previously calculated,
- * calculate a bitmask with an intersection of sets of:
- * - registers supported by current port,
- * - previously calculated available tag registers.
- */
- if (mlx5_flow_hw_avl_tags_init_cnt) {
- MLX5_ASSERT(mlx5_flow_hw_aso_tag == priv->mtr_color_reg);
- for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
- if (mlx5_flow_hw_avl_tags[i] == REG_NON)
- continue;
- reg_off = mlx5_flow_hw_avl_tags[i] - REG_C_0;
- if ((1 << reg_off) & masks)
- common_masks |= (1 << reg_off);
- }
- if (common_masks != masks)
- masks = common_masks;
- else
- goto after_avl_tags;
+ for (i = 0, j = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
+ if (!!((1 << i) & masks))
+ reg->hw_avl_tags[j++] = mlx5_regc_value(i);
}
- j = 0;
- for (i = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
- if ((1 << i) & masks)
- mlx5_flow_hw_avl_tags[j++] = (enum modify_reg)(i + (uint32_t)REG_C_0);
- }
- /* Clear the rest of unusable tag indexes. */
- for (; j < MLX5_FLOW_HW_TAGS_MAX; j++)
- mlx5_flow_hw_avl_tags[j] = REG_NON;
-after_avl_tags:
- priv->sh->hws_tags = 1;
- mlx5_flow_hw_aso_tag = (enum modify_reg)priv->mtr_color_reg;
- mlx5_flow_hw_avl_tags_init_cnt++;
-}
-
-/*
- * Reset the available tag registers information to NONE.
- *
- * @param[in] dev
- * Pointer to the rte_eth_dev structure.
- */
-void flow_hw_clear_tags_set(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
-
- if (!priv->sh->hws_tags)
- return;
- priv->sh->hws_tags = 0;
- mlx5_flow_hw_avl_tags_init_cnt--;
- if (!mlx5_flow_hw_avl_tags_init_cnt)
- memset(mlx5_flow_hw_avl_tags, REG_NON,
- sizeof(enum modify_reg) * MLX5_FLOW_HW_TAGS_MAX);
-}
-
-uint32_t mlx5_flow_hw_flow_metadata_config_refcnt;
-uint8_t mlx5_flow_hw_flow_metadata_esw_en;
-uint8_t mlx5_flow_hw_flow_metadata_xmeta_en;
-
-/**
- * Initializes static configuration of META flow items.
- *
- * As a temporary workaround, META flow item is translated to a register,
- * based on statically saved dv_esw_en and dv_xmeta_en device arguments.
- * It is a workaround for flow_hw_get_reg_id() where port specific information
- * is not available at runtime.
- *
- * Values of dv_esw_en and dv_xmeta_en device arguments are taken from the first opened port.
- * This means that each mlx5 port will use the same configuration for translation
- * of META flow items.
- *
- * @param[in] dev
- * Pointer to Ethernet device.
- */
-void
-flow_hw_init_flow_metadata_config(struct rte_eth_dev *dev)
-{
- uint32_t refcnt;
-
- refcnt = __atomic_fetch_add(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
- __ATOMIC_RELAXED);
- if (refcnt > 0)
- return;
- mlx5_flow_hw_flow_metadata_esw_en = MLX5_SH(dev)->config.dv_esw_en;
- mlx5_flow_hw_flow_metadata_xmeta_en = MLX5_SH(dev)->config.dv_xmeta_en;
-}
-
-/**
- * Clears statically stored configuration related to META flow items.
- */
-void
-flow_hw_clear_flow_metadata_config(void)
-{
- uint32_t refcnt;
-
- refcnt = __atomic_fetch_sub(&mlx5_flow_hw_flow_metadata_config_refcnt, 1,
- __ATOMIC_RELAXED) - 1;
- if (refcnt > 0)
- return;
- mlx5_flow_hw_flow_metadata_esw_en = 0;
- mlx5_flow_hw_flow_metadata_xmeta_en = 0;
+ reg->mlx5_flow_hw_aso_tag = reg->mtr_color_reg;
}
static int
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 14a435d157..eb88dfe39c 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -67,7 +67,7 @@ mlx5_flow_meter_action_create(struct mlx5_priv *priv,
val = (ebs_eir >> ASO_DSEG_EBS_MAN_OFFSET) & ASO_DSEG_MAN_MASK;
MLX5_SET(flow_meter_parameters, fmp, ebs_mantissa, val);
mtr_init.next_table = def_policy->sub_policy.tbl_rsc->obj;
- mtr_init.reg_c_index = priv->mtr_color_reg - REG_C_0;
+ mtr_init.reg_c_index = priv->sh->registers.mtr_color_reg - REG_C_0;
mtr_init.flow_meter_parameter = fmp;
mtr_init.flow_meter_parameter_sz =
MLX5_ST_SZ_BYTES(flow_meter_parameters);
@@ -1597,6 +1597,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
uint64_t modify_bits, uint32_t active_state, uint32_t is_enable)
{
#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
uint32_t in[MLX5_ST_SZ_DW(flow_meter_parameters)] = { 0 };
uint32_t *attr;
struct mlx5dv_dr_flow_meter_attr mod_attr = { 0 };
@@ -1604,19 +1605,20 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
struct mlx5_aso_mtr *aso_mtr = NULL;
uint32_t cbs_cir, ebs_eir, val;
- if (priv->sh->meter_aso_en) {
+ if (sh->meter_aso_en) {
fm->is_enable = !!is_enable;
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
- ret = mlx5_aso_meter_update_by_wqe(priv->sh, MLX5_HW_INV_QUEUE,
- aso_mtr, &priv->mtr_bulk, NULL, true);
+ ret = mlx5_aso_meter_update_by_wqe(sh, MLX5_HW_INV_QUEUE,
+ aso_mtr, &priv->mtr_bulk,
+ NULL, true);
if (ret)
return ret;
- ret = mlx5_aso_mtr_wait(priv->sh, MLX5_HW_INV_QUEUE, aso_mtr);
+ ret = mlx5_aso_mtr_wait(sh, MLX5_HW_INV_QUEUE, aso_mtr);
if (ret)
return ret;
} else {
/* Fill command parameters. */
- mod_attr.reg_c_index = priv->mtr_color_reg - REG_C_0;
+ mod_attr.reg_c_index = sh->registers.mtr_color_reg - REG_C_0;
mod_attr.flow_meter_parameter = in;
mod_attr.flow_meter_parameter_sz =
MLX5_ST_SZ_BYTES(flow_meter_parameters);
--
2.39.2
* [PATCH 06/30] net/mlx5: merge REG_C aliases
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (3 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 05/30] net/mlx5: separate port REG_C registers usage Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 07/30] net/mlx5: initialize HWS flow tags registers in shared dev context Gregory Etelson
` (24 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
Merge `mtr_color_reg` and `mlx5_flow_hw_aso_tag`
into a single `aso_reg` field.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_os.c | 10 +++++-----
drivers/net/mlx5/mlx5.h | 3 +--
drivers/net/mlx5/mlx5_flow.c | 16 ++++++++--------
drivers/net/mlx5/mlx5_flow.h | 3 +--
drivers/net/mlx5/mlx5_flow_dv.c | 7 ++++---
drivers/net/mlx5/mlx5_flow_hw.c | 3 +--
drivers/net/mlx5/mlx5_flow_meter.c | 4 ++--
7 files changed, 22 insertions(+), 24 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 96d32d11d8..ed273e14cf 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1328,14 +1328,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
* Prefer REG_C_3 if it is available.
*/
if (reg_c_mask & (1 << (REG_C_3 - REG_C_0)))
- sh->registers.mtr_color_reg = REG_C_3;
+ sh->registers.aso_reg = REG_C_3;
else
- sh->registers.mtr_color_reg =
+ sh->registers.aso_reg =
ffs(reg_c_mask) - 1 + REG_C_0;
priv->mtr_en = 1;
priv->mtr_reg_share = hca_attr->qos.flow_meter;
DRV_LOG(DEBUG, "The REG_C meter uses is %d",
- sh->registers.mtr_color_reg);
+ sh->registers.aso_reg);
}
}
if (hca_attr->qos.sup && hca_attr->qos.flow_meter_aso_sup) {
@@ -1360,7 +1360,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
sh->tunnel_header_2_3 = 1;
#endif
#ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
- if (hca_attr->flow_hit_aso && sh->registers.mtr_color_reg == REG_C_3) {
+ if (hca_attr->flow_hit_aso && sh->registers.aso_reg == REG_C_3) {
sh->flow_hit_aso_en = 1;
err = mlx5_flow_aso_age_mng_init(sh);
if (err) {
@@ -1374,7 +1374,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
defined (HAVE_MLX5_DR_ACTION_ASO_CT)
/* HWS create CT ASO SQ based on HWS configure queue number. */
if (sh->config.dv_flow_en != 2 &&
- hca_attr->ct_offload && sh->registers.mtr_color_reg == REG_C_3) {
+ hca_attr->ct_offload && sh->registers.aso_reg == REG_C_3) {
err = mlx5_flow_aso_ct_mng_init(sh);
if (err) {
err = -err;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 01cb21fc93..99a2ad88ed 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1376,8 +1376,7 @@ struct mlx5_hws_cnt_svc_mng {
#define MLX5_FLOW_HW_TAGS_MAX 8
struct mlx5_dev_registers {
- enum modify_reg mlx5_flow_hw_aso_tag;
- enum modify_reg mtr_color_reg; /* Meter color match REG_C. */
+ enum modify_reg aso_reg;
enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
};
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3ddc3ba772..ad9a2f2273 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1364,23 +1364,23 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* should use the meter color register for match.
*/
if (priv->mtr_reg_share)
- return reg->mtr_color_reg;
+ return reg->aso_reg;
else
- return reg->mtr_color_reg != REG_C_2 ? REG_C_2 :
+ return reg->aso_reg != REG_C_2 ? REG_C_2 :
REG_C_3;
case MLX5_MTR_COLOR:
case MLX5_ASO_FLOW_HIT:
case MLX5_ASO_CONNTRACK:
case MLX5_SAMPLE_ID:
/* All features use the same REG_C. */
- MLX5_ASSERT(reg->mtr_color_reg != REG_NON);
- return reg->mtr_color_reg;
+ MLX5_ASSERT(reg->aso_reg != REG_NON);
+ return reg->aso_reg;
case MLX5_COPY_MARK:
/*
* Metadata COPY_MARK register using is in meter suffix sub
* flow while with meter. It's safe to share the same register.
*/
- return reg->mtr_color_reg != REG_C_2 ? REG_C_2 : REG_C_3;
+ return reg->aso_reg != REG_C_2 ? REG_C_2 : REG_C_3;
case MLX5_APP_TAG:
/*
* If meter is enable, it will engage the register for color
@@ -1389,7 +1389,7 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* match.
* If meter is disable, free to use all available registers.
*/
- start_reg = reg->mtr_color_reg != REG_C_2 ? REG_C_2 :
+ start_reg = reg->aso_reg != REG_C_2 ? REG_C_2 :
(priv->mtr_reg_share ? REG_C_3 : REG_C_4);
skip_mtr_reg = !!(priv->mtr_en && start_reg == REG_C_2);
if (id > (uint32_t)(REG_C_7 - start_reg))
@@ -1407,7 +1407,7 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
* color register.
*/
if (skip_mtr_reg && priv->sh->flow_mreg_c
- [id + start_reg - REG_C_0] >= reg->mtr_color_reg) {
+ [id + start_reg - REG_C_0] >= reg->aso_reg) {
if (id >= (uint32_t)(REG_C_7 - start_reg))
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -6480,7 +6480,7 @@ flow_sample_split_prep(struct rte_eth_dev *dev,
* metadata regC is REG_NON, back to use application tag
* index 0.
*/
- if (unlikely(priv->sh->registers.mtr_color_reg == REG_NON))
+ if (unlikely(priv->sh->registers.aso_reg == REG_NON))
ret = mlx5_flow_get_reg_id(dev, MLX5_APP_TAG, 0, error);
else
ret = mlx5_flow_get_reg_id(dev, MLX5_SAMPLE_ID, 0, error);
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 011db1fb75..250d9eb1fc 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1663,7 +1663,6 @@ flow_hw_get_wire_port(struct ibv_context *ibctx)
* Convert metadata or tag to the actual register.
* META: Can only be used to match in the FDB in this stage, fixed C_1.
* TAG: C_x expect meter color reg and the reserved ones.
- * TODO: Per port / device, FDB or NIC for Meta matching.
*/
static __rte_always_inline int
flow_hw_get_reg_id(struct rte_eth_dev *dev,
@@ -1692,7 +1691,7 @@ flow_hw_get_reg_id(struct rte_eth_dev *dev,
return REG_A;
case RTE_FLOW_ITEM_TYPE_CONNTRACK:
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
- return reg->mlx5_flow_hw_aso_tag;
+ return reg->aso_reg;
case RTE_FLOW_ITEM_TYPE_TAG:
if (id == MLX5_LINEAR_HASH_TAG_INDEX)
return REG_C_3;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 024023abb5..9268a07c84 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2026,7 +2026,8 @@ mlx5_flow_field_id_to_modify_info
if (priv->sh->config.dv_flow_en == 2)
reg = flow_hw_get_reg_id
- (dev, RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
+ (dev,
+ RTE_FLOW_ITEM_TYPE_METER_COLOR, 0);
else
reg = mlx5_flow_get_reg_id(dev, MLX5_MTR_COLOR,
0, error);
@@ -3923,7 +3924,7 @@ flow_dv_validate_item_meter_color(struct rte_eth_dev *dev,
};
int ret;
- if (priv->sh->registers.mtr_color_reg == REG_NON)
+ if (priv->sh->registers.aso_reg == REG_NON)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM, item,
"meter color register"
@@ -8375,7 +8376,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
if ((action_flags & MLX5_FLOW_ACTION_SET_TAG) &&
tag_id == 0 &&
- priv->sh->registers.mtr_color_reg == REG_NON)
+ priv->sh->registers.aso_reg == REG_NON)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, NULL,
"sample after tag action causes metadata tag index 0 corruption");
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 22cf412035..c48c2eec39 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8469,7 +8469,7 @@ void flow_hw_init_tags_set(struct rte_eth_dev *dev)
* The CAPA is global for common device but only used in net.
* It is shared per eswitch domain.
*/
- unset |= 1 << mlx5_regc_index(reg->mtr_color_reg);
+ unset |= 1 << mlx5_regc_index(reg->aso_reg);
unset |= 1 << mlx5_regc_index(REG_C_6);
if (sh->config.dv_esw_en)
unset |= 1 << mlx5_regc_index(REG_C_0);
@@ -8480,7 +8480,6 @@ void flow_hw_init_tags_set(struct rte_eth_dev *dev)
if (!!((1 << i) & masks))
reg->hw_avl_tags[j++] = mlx5_regc_value(i);
}
- reg->mlx5_flow_hw_aso_tag = reg->mtr_color_reg;
}
static int
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index eb88dfe39c..7cbf772ea4 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -67,7 +67,7 @@ mlx5_flow_meter_action_create(struct mlx5_priv *priv,
val = (ebs_eir >> ASO_DSEG_EBS_MAN_OFFSET) & ASO_DSEG_MAN_MASK;
MLX5_SET(flow_meter_parameters, fmp, ebs_mantissa, val);
mtr_init.next_table = def_policy->sub_policy.tbl_rsc->obj;
- mtr_init.reg_c_index = priv->sh->registers.mtr_color_reg - REG_C_0;
+ mtr_init.reg_c_index = priv->sh->registers.aso_reg - REG_C_0;
mtr_init.flow_meter_parameter = fmp;
mtr_init.flow_meter_parameter_sz =
MLX5_ST_SZ_BYTES(flow_meter_parameters);
@@ -1618,7 +1618,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
return ret;
} else {
/* Fill command parameters. */
- mod_attr.reg_c_index = sh->registers.mtr_color_reg - REG_C_0;
+ mod_attr.reg_c_index = sh->registers.aso_reg - REG_C_0;
mod_attr.flow_meter_parameter = in;
mod_attr.flow_meter_parameter_sz =
MLX5_ST_SZ_BYTES(flow_meter_parameters);
--
2.39.2
* [PATCH 07/30] net/mlx5: initialize HWS flow tags registers in shared dev context
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (4 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 06/30] net/mlx5: merge REG_C aliases Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 08/30] net/mlx5/hws: adding method to query rule hash Gregory Etelson
` (23 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
Move the HWS flow tag registers initialization to the shared device context.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_os.c | 35 ++-------------
drivers/net/mlx5/mlx5.c | 75 ++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 6 +++
drivers/net/mlx5/mlx5_flow.h | 3 --
drivers/net/mlx5/mlx5_flow_hw.c | 34 ---------------
5 files changed, 84 insertions(+), 69 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index ed273e14cf..ec067ef52c 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1304,38 +1304,12 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
struct mlx5_hca_attr *hca_attr = &sh->cdev->config.hca_attr;
sh->steering_format_version = hca_attr->steering_format_version;
-#if defined(HAVE_MLX5DV_DR) && \
- (defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER) || \
- defined(HAVE_MLX5_DR_CREATE_ACTION_ASO))
+#if defined(HAVE_MLX5_DR_CREATE_ACTION_ASO_EXT)
if (hca_attr->qos.sup && hca_attr->qos.flow_meter_old &&
sh->config.dv_flow_en) {
- uint8_t reg_c_mask = hca_attr->qos.flow_meter_reg_c_ids;
- /*
- * Meter needs two REG_C's for color match and pre-sfx
- * flow match. Here get the REG_C for color match.
- * REG_C_0 and REG_C_1 is reserved for metadata feature.
- */
- reg_c_mask &= 0xfc;
- if (rte_popcount32(reg_c_mask) < 1) {
- priv->mtr_en = 0;
- DRV_LOG(WARNING, "No available register for"
- " meter.");
- } else {
- /*
- * The meter color register is used by the
- * flow-hit feature as well.
- * The flow-hit feature must use REG_C_3
- * Prefer REG_C_3 if it is available.
- */
- if (reg_c_mask & (1 << (REG_C_3 - REG_C_0)))
- sh->registers.aso_reg = REG_C_3;
- else
- sh->registers.aso_reg =
- ffs(reg_c_mask) - 1 + REG_C_0;
+ if (sh->registers.aso_reg != REG_NON) {
priv->mtr_en = 1;
priv->mtr_reg_share = hca_attr->qos.flow_meter;
- DRV_LOG(DEBUG, "The REG_C meter uses is %d",
- sh->registers.aso_reg);
}
}
if (hca_attr->qos.sup && hca_attr->qos.flow_meter_aso_sup) {
@@ -1358,7 +1332,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
sh->tunnel_header_0_1 = 1;
if (hca_attr->flow.tunnel_header_2_3)
sh->tunnel_header_2_3 = 1;
-#endif
+#endif /* HAVE_MLX5_DR_CREATE_ACTION_ASO_EXT */
#ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO
if (hca_attr->flow_hit_aso && sh->registers.aso_reg == REG_C_3) {
sh->flow_hit_aso_en = 1;
@@ -1617,9 +1591,6 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
err = ENOTSUP;
goto error;
}
- /* Only HWS requires this information. */
- if (sh->refcnt == 1)
- flow_hw_init_tags_set(eth_dev);
if (priv->sh->config.dv_esw_en &&
flow_hw_create_vport_action(eth_dev)) {
DRV_LOG(ERR, "port %u failed to create vport action",
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index c13ce2c13c..840c566162 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1599,6 +1599,80 @@ mlx5_rt_timestamp_config(struct mlx5_dev_ctx_shared *sh,
}
}
+static void
+mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
+{
+ struct mlx5_dev_registers *reg = &sh->registers;
+ uint32_t meta_mode = sh->config.dv_xmeta_en;
+ uint8_t masks = (uint8_t)sh->cdev->config.hca_attr.set_reg_c;
+ uint8_t unset = 0;
+ uint32_t i, j;
+
+ /*
+ * The CAPA is global for common device but only used in net.
+ * It is shared per eswitch domain.
+ */
+ if (reg->aso_reg != REG_NON)
+ unset |= 1 << mlx5_regc_index(reg->aso_reg);
+ unset |= 1 << mlx5_regc_index(REG_C_6);
+ if (sh->config.dv_esw_en)
+ unset |= 1 << mlx5_regc_index(REG_C_0);
+ if (meta_mode == MLX5_XMETA_MODE_META32_HWS)
+ unset |= 1 << mlx5_regc_index(REG_C_1);
+ masks &= ~unset;
+ for (i = 0, j = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
+ if (!!((1 << i) & masks))
+ reg->hw_avl_tags[j++] = mlx5_regc_value(i);
+ }
+}
+
+static void
+mlx5_init_aso_register(struct mlx5_dev_ctx_shared *sh)
+{
+#if defined(HAVE_MLX5_DR_CREATE_ACTION_ASO_EXT)
+ const struct mlx5_hca_attr *hca_attr = &sh->cdev->config.hca_attr;
+ const struct mlx5_hca_qos_attr *qos = &hca_attr->qos;
+ uint8_t reg_c_mask = qos->flow_meter_reg_c_ids & 0xfc;
+
+ if (!(qos->sup && qos->flow_meter_old && sh->config.dv_flow_en))
+ return;
+ /*
+ * Meter needs two REG_C's for color match and pre-sfx
+ * flow match. Here get the REG_C for color match.
+ * REG_C_0 and REG_C_1 is reserved for metadata feature.
+ */
+ if (__builtin_popcount(reg_c_mask) > 0) {
+ /*
+ * The meter color register is used by the
+ * flow-hit feature as well.
+ * The flow-hit feature must use REG_C_3
+ * Prefer REG_C_3 if it is available.
+ */
+ if (reg_c_mask & (1 << mlx5_regc_index(REG_C_3)))
+ sh->registers.aso_reg = REG_C_3;
+ else
+ sh->registers.aso_reg =
+ mlx5_regc_value(ffs(reg_c_mask) - 1);
+ }
+#else
+ RTE_SET_USED(sh);
+#endif
+}
+
+static void
+mlx5_init_shared_dev_registers(struct mlx5_dev_ctx_shared *sh)
+{
+ if (sh->cdev->config.devx)
+ mlx5_init_aso_register(sh);
+ if (sh->registers.aso_reg != REG_NON) {
+ DRV_LOG(DEBUG, "ASO register: REG_C%d",
+ mlx5_regc_index(sh->registers.aso_reg));
+ } else {
+ DRV_LOG(DEBUG, "ASO register: NONE");
+ }
+ mlx5_init_hws_flow_tags_registers(sh);
+}
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
@@ -1720,6 +1794,7 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
/* Add context to the global device list. */
LIST_INSERT_HEAD(&mlx5_dev_ctx_list, sh, next);
rte_spinlock_init(&sh->geneve_tlv_opt_sl);
+ mlx5_init_shared_dev_registers(sh);
exit:
pthread_mutex_unlock(&mlx5_dev_ctx_list_mutex);
return sh;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 99a2ad88ed..a0dcd788b4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1380,6 +1380,12 @@ struct mlx5_dev_registers {
enum modify_reg hw_avl_tags[MLX5_FLOW_HW_TAGS_MAX];
};
+#if defined(HAVE_MLX5DV_DR) && \
+ (defined(HAVE_MLX5_DR_CREATE_ACTION_FLOW_METER) || \
+ defined(HAVE_MLX5_DR_CREATE_ACTION_ASO))
+#define HAVE_MLX5_DR_CREATE_ACTION_ASO_EXT
+#endif
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 250d9eb1fc..aea8b38f39 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1727,9 +1727,6 @@ flow_hw_get_reg_id_from_ctx(void *dr_ctx,
void flow_hw_set_port_info(struct rte_eth_dev *dev);
void flow_hw_clear_port_info(struct rte_eth_dev *dev);
-
-void flow_hw_init_tags_set(struct rte_eth_dev *dev);
-
int flow_hw_create_vport_action(struct rte_eth_dev *dev);
void flow_hw_destroy_vport_action(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index c48c2eec39..b0ef14c14e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -8448,40 +8448,6 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev)
info->is_wire = 0;
}
-/*
- * Initialize the information of available tag registers and an intersection
- * of all the probed devices' REG_C_Xs.
- * PS. No port concept in steering part, right now it cannot be per port level.
- *
- * @param[in] dev
- * Pointer to the rte_eth_dev structure.
- */
-void flow_hw_init_tags_set(struct rte_eth_dev *dev)
-{
- struct mlx5_dev_ctx_shared *sh = MLX5_SH(dev);
- struct mlx5_dev_registers *reg = &sh->registers;
- uint32_t meta_mode = sh->config.dv_xmeta_en;
- uint8_t masks = (uint8_t)sh->cdev->config.hca_attr.set_reg_c;
- uint8_t unset = 0;
- uint32_t i, j;
-
- /*
- * The CAPA is global for common device but only used in net.
- * It is shared per eswitch domain.
- */
- unset |= 1 << mlx5_regc_index(reg->aso_reg);
- unset |= 1 << mlx5_regc_index(REG_C_6);
- if (sh->config.dv_esw_en)
- unset |= 1 << mlx5_regc_index(REG_C_0);
- if (meta_mode == MLX5_XMETA_MODE_META32_HWS)
- unset |= 1 << mlx5_regc_index(REG_C_1);
- masks &= ~unset;
- for (i = 0, j = 0; i < MLX5_FLOW_HW_TAGS_MAX; i++) {
- if (!!((1 << i) & masks))
- reg->hw_avl_tags[j++] = mlx5_regc_value(i);
- }
-}
-
static int
flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused,
uint32_t idx,
--
2.39.2
* [PATCH 08/30] net/mlx5/hws: adding method to query rule hash
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (5 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 07/30] net/mlx5: initialize HWS flow tags registers in shared dev context Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 09/30] net/mlx5: add support for calc hash Gregory Etelson
` (22 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Itamar Gozlan, Matan Azrad,
Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Itamar Gozlan <igozlan@nvidia.com>
Add a method to the HW steering API that allows querying
the hash result for a given matcher and a set of items. This
can be used to predict the location of the rule in the hash table.
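An illustrative caller sketch; the matcher and pattern items are assumed to
exist already, only the new call is shown:

    uint32_t hash;
    int ret;

    /* Predict where a rule built from 'items' with match template 0
     * would land in the matcher's hash table.
     */
    ret = mlx5dr_rule_hash_calculate(matcher, items, 0,
                                     MLX5DR_RULE_HASH_CALC_MODE_IDX,
                                     &hash);
    if (ret)
            return ret; /* e.g. matcher type or hash function not supported */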
Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 8 +++-
drivers/net/mlx5/hws/meson.build | 1 +
drivers/net/mlx5/hws/mlx5dr.h | 26 +++++++++++
drivers/net/mlx5/hws/mlx5dr_cmd.c | 3 ++
drivers/net/mlx5/hws/mlx5dr_cmd.h | 3 +-
drivers/net/mlx5/hws/mlx5dr_crc32.c | 61 ++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_crc32.h | 13 ++++++
drivers/net/mlx5/hws/mlx5dr_internal.h | 1 +
drivers/net/mlx5/hws/mlx5dr_rule.c | 37 ++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_rule.h | 1 +
10 files changed, 152 insertions(+), 2 deletions(-)
create mode 100644 drivers/net/mlx5/hws/mlx5dr_crc32.c
create mode 100644 drivers/net/mlx5/hws/mlx5dr_crc32.h
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e13ca3cd22..19c6d0282b 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2279,6 +2279,9 @@ enum {
MLX5_GENERATE_WQE_TYPE_FLOW_UPDATE = 1 << 1,
};
+enum {
+ MLX5_FLOW_TABLE_HASH_TYPE_CRC32,
+};
/*
* HCA Capabilities 2
*/
@@ -2328,7 +2331,10 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
u8 format_select_dw_gtpu_dw_2[0x8];
u8 format_select_dw_gtpu_first_ext_dw_0[0x8];
u8 generate_wqe_type[0x20];
- u8 reserved_at_2c0[0x540];
+ u8 reserved_at_2c0[0x160];
+ u8 reserved_at_420[0x1c];
+ u8 flow_table_hash_type[0x4];
+ u8 reserved_at_440[0x3c0];
};
struct mlx5_ifc_esw_cap_bits {
diff --git a/drivers/net/mlx5/hws/meson.build b/drivers/net/mlx5/hws/meson.build
index 38776d5163..bbcc628557 100644
--- a/drivers/net/mlx5/hws/meson.build
+++ b/drivers/net/mlx5/hws/meson.build
@@ -19,4 +19,5 @@ sources += files(
'mlx5dr_definer.c',
'mlx5dr_debug.c',
'mlx5dr_pat_arg.c',
+ 'mlx5dr_crc32.c',
)
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 1995c55132..39d902e762 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -118,6 +118,11 @@ enum mlx5dr_matcher_distribute_mode {
MLX5DR_MATCHER_DISTRIBUTE_BY_LINEAR = 0x1,
};
+enum mlx5dr_rule_hash_calc_mode {
+ MLX5DR_RULE_HASH_CALC_MODE_RAW,
+ MLX5DR_RULE_HASH_CALC_MODE_IDX,
+};
+
struct mlx5dr_matcher_attr {
/* Processing priority inside table */
uint32_t priority;
@@ -430,6 +435,27 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
struct mlx5dr_rule_action rule_actions[],
struct mlx5dr_rule_attr *attr);
+/* Calculate hash for a given set of items, which indicates rule location in
+ * the hash table.
+ *
+ * @param[in] matcher
+ * The matcher of the created rule.
+ * @param[in] items
+ * Matching pattern item definition.
+ * @param[in] mt_idx
+ * Match template index that the match was created with.
+ * @param[in] mode
+ * Hash calculation mode
+ * @param[in, out] ret_hash
+ * Returned calculated hash result
+ * @return zero on success non zero otherwise.
+ */
+int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher,
+ const struct rte_flow_item items[],
+ uint8_t mt_idx,
+ enum mlx5dr_rule_hash_calc_mode mode,
+ uint32_t *ret_hash);
+
/* Create direct rule drop action.
*
* @param[in] ctx
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 781de40c02..c52cdd0767 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -1154,6 +1154,9 @@ int mlx5dr_cmd_query_caps(struct ibv_context *ctx,
(res & MLX5_CROSS_VHCA_ALLOWED_OBJS_FT) &&
(res & MLX5_CROSS_VHCA_ALLOWED_OBJS_RTC);
+ caps->flow_table_hash_type = MLX5_GET(query_hca_cap_out, out,
+ capability.cmd_hca_cap_2.flow_table_hash_type);
+
MLX5_SET(query_hca_cap_in, in, op_mod,
MLX5_GET_HCA_CAP_OP_MOD_NIC_FLOW_TABLE |
MLX5_HCA_CAP_OPMOD_GET_CUR);
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index 28e5ea4726..03db62e2e2 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -217,10 +217,11 @@ struct mlx5dr_cmd_query_caps {
uint8_t rtc_log_depth_max;
uint8_t format_select_gtpu_dw_0;
uint8_t format_select_gtpu_dw_1;
+ uint8_t flow_table_hash_type;
uint8_t format_select_gtpu_dw_2;
uint8_t format_select_gtpu_ext_dw_0;
- uint32_t linear_match_definer;
uint8_t access_index_mode;
+ uint32_t linear_match_definer;
bool full_dw_jumbo_support;
bool rtc_hash_split_table;
bool rtc_linear_lookup_table;
diff --git a/drivers/net/mlx5/hws/mlx5dr_crc32.c b/drivers/net/mlx5/hws/mlx5dr_crc32.c
new file mode 100644
index 0000000000..9c454eda0c
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_crc32.c
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include "mlx5dr_internal.h"
+
+uint32_t dr_ste_crc_tab32[] = {
+ 0x0, 0x77073096, 0xee0e612c, 0x990951ba, 0x76dc419, 0x706af48f,
+ 0xe963a535, 0x9e6495a3, 0xedb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,
+ 0x9b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2,
+ 0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,
+ 0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,
+ 0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,
+ 0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c,
+ 0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,
+ 0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423,
+ 0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,
+ 0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x1db7106,
+ 0x98d220bc, 0xefd5102a, 0x71b18589, 0x6b6b51f, 0x9fbfe4a5, 0xe8b8d433,
+ 0x7807c9a2, 0xf00f934, 0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x86d3d2d,
+ 0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,
+ 0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,
+ 0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,
+ 0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7,
+ 0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,
+ 0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa,
+ 0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,
+ 0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81,
+ 0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x3b6e20c, 0x74b1d29a,
+ 0xead54739, 0x9dd277af, 0x4db2615, 0x73dc1683, 0xe3630b12, 0x94643b84,
+ 0xd6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0xa00ae27, 0x7d079eb1,
+ 0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,
+ 0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,
+ 0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e,
+ 0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,
+ 0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55,
+ 0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,
+ 0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28,
+ 0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,
+ 0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x26d930a, 0x9c0906a9, 0xeb0e363f,
+ 0x72076785, 0x5005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0xcb61b38,
+ 0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0xbdbdf21, 0x86d3d2d4, 0xf1d4e242,
+ 0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,
+ 0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69,
+ 0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,
+ 0xa7672661, 0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc,
+ 0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,
+ 0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693,
+ 0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,
+ 0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d
+};
+
+uint32_t mlx5dr_crc32_calc(uint8_t *p, size_t len)
+{
+ uint32_t crc = 0;
+
+ while (len--)
+ crc = (crc >> 8) ^ dr_ste_crc_tab32[(crc ^ *p++) & 255];
+
+ return rte_be_to_cpu_32(crc);
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_crc32.h b/drivers/net/mlx5/hws/mlx5dr_crc32.h
new file mode 100644
index 0000000000..9aab9e06ca
--- /dev/null
+++ b/drivers/net/mlx5/hws/mlx5dr_crc32.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef MLX5DR_CRC32_C_
+#define MLX5DR_CRC32_C_
+
+/* Ethernet AUTODIN II CRC32 (little-endian)
+ * CRC32_POLY 0xedb88320
+ */
+uint32_t mlx5dr_crc32_calc(uint8_t *p, size_t len);
+
+#endif /* MLX5DR_CRC32_C_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h
index 3770d28e62..021d599a56 100644
--- a/drivers/net/mlx5/hws/mlx5dr_internal.h
+++ b/drivers/net/mlx5/hws/mlx5dr_internal.h
@@ -38,6 +38,7 @@
#include "mlx5dr_matcher.h"
#include "mlx5dr_debug.h"
#include "mlx5dr_pat_arg.h"
+#include "mlx5dr_crc32.h"
#define DW_SIZE 4
#define BITS_IN_BYTE 8
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 931c68b160..980a99b226 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -770,3 +770,40 @@ size_t mlx5dr_rule_get_handle_size(void)
{
return sizeof(struct mlx5dr_rule);
}
+
+int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher,
+ const struct rte_flow_item items[],
+ uint8_t mt_idx,
+ enum mlx5dr_rule_hash_calc_mode mode,
+ uint32_t *ret_hash)
+{
+ uint8_t tag[MLX5DR_STE_SZ] = {0};
+ struct mlx5dr_match_template *mt;
+
+ if (!matcher || !matcher->mt) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ mt = &matcher->mt[mt_idx];
+
+ if (mlx5dr_matcher_req_fw_wqe(matcher) ||
+ mlx5dr_table_is_root(matcher->tbl) ||
+ matcher->tbl->ctx->caps->access_index_mode == MLX5DR_MATCHER_INSERT_BY_HASH ||
+ matcher->tbl->ctx->caps->flow_table_hash_type != MLX5_FLOW_TABLE_HASH_TYPE_CRC32) {
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+ }
+
+ mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz, tag);
+ if (mlx5dr_matcher_mt_is_jumbo(mt))
+ *ret_hash = mlx5dr_crc32_calc(tag, MLX5DR_JUMBO_TAG_SZ);
+ else
+ *ret_hash = mlx5dr_crc32_calc(tag + MLX5DR_ACTIONS_SZ,
+ MLX5DR_MATCH_TAG_SZ);
+
+ if (mode == MLX5DR_RULE_HASH_CALC_MODE_IDX)
+ *ret_hash = *ret_hash & (BIT(matcher->attr.rule.num_log) - 1);
+
+ return 0;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index 886cf77992..f7d97eead5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,6 +10,7 @@ enum {
MLX5DR_ACTIONS_SZ = 12,
MLX5DR_MATCH_TAG_SZ = 32,
MLX5DR_JUMBO_TAG_SZ = 44,
+ MLX5DR_STE_SZ = 64,
};
enum mlx5dr_rule_status {
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
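As a cross-check on the hard-coded dr_ste_crc_tab32[] above: the values match the
standard reflected CRC-32 table for the polynomial 0xedb88320 declared in
mlx5dr_crc32.h. A minimal standalone sketch (not part of the patch) that
regenerates such a table:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t tab[256];
	uint32_t i;
	int bit;

	/* Reflected CRC-32, polynomial 0xedb88320, as named in mlx5dr_crc32.h. */
	for (i = 0; i < 256; i++) {
		uint32_t crc = i;

		for (bit = 0; bit < 8; bit++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xedb88320 : crc >> 1;
		tab[i] = crc;
	}
	/* Spot check against the patch table, e.g. tab[1] == 0x77073096,
	 * tab[255] == 0x2d02ef8d.
	 */
	printf("0x%x 0x%x 0x%x\n", tab[0], tab[1], tab[255]);
	return 0;
}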
* [PATCH 09/30] net/mlx5: add support for calc hash
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (6 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 08/30] net/mlx5/hws: adding method to query rule hash Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 10/30] net/mlx5: fix insert by index Gregory Etelson
` (21 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
From: Ori Kam <orika@nvidia.com>
This commit adds calculate hash function support to the mlx5 PMD.
Signed-off-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 32 ++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow.h | 8 ++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 31 ++++++++++++++++++++++++++++++-
3 files changed, 70 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ad9a2f2273..819831cff8 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1178,6 +1178,13 @@ mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
enum rte_flow_query_update_mode mode,
void *user_data,
struct rte_flow_error *error);
+static int
+mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
+ const struct rte_flow_template_table *table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ uint32_t *hash, struct rte_flow_error *error);
+
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
.create = mlx5_flow_create,
@@ -1231,6 +1238,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
mlx5_flow_action_list_handle_query_update,
.async_action_list_handle_query_update =
mlx5_flow_async_action_list_handle_query_update,
+ .flow_calc_table_hash = mlx5_flow_calc_table_hash,
};
/* Tunnel information. */
@@ -11058,6 +11066,30 @@ mlx5_flow_async_action_list_handle_query_update(struct rte_eth_dev *dev,
}
+static int
+mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
+ const struct rte_flow_template_table *table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ uint32_t *hash, struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+ enum mlx5_flow_drv_type drv_type = flow_get_drv_type(dev, &attr);
+ const struct mlx5_flow_driver_ops *fops;
+
+ if (drv_type == MLX5_FLOW_TYPE_MIN || drv_type == MLX5_FLOW_TYPE_MAX)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid driver type");
+ fops = flow_get_drv_ops(drv_type);
+ if (!fops || !fops->action_query_update)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "no query_update handler");
+ return fops->flow_calc_table_hash(dev, table, pattern, pattern_template_index,
+ hash, error);
+}
+
/**
* Destroy all indirect actions (shared RSS).
*
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index aea8b38f39..64e2fc6f04 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2059,6 +2059,13 @@ typedef int
const void **update, void **query,
enum rte_flow_query_update_mode mode,
void *user_data, struct rte_flow_error *error);
+typedef int
+(*mlx5_flow_calc_table_hash_t)
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_template_table *table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ uint32_t *hash, struct rte_flow_error *error);
struct mlx5_flow_driver_ops {
mlx5_flow_validate_t validate;
@@ -2130,6 +2137,7 @@ struct mlx5_flow_driver_ops {
action_list_handle_query_update;
mlx5_flow_async_action_list_handle_query_update_t
async_action_list_handle_query_update;
+ mlx5_flow_calc_table_hash_t flow_calc_table_hash;
};
/* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b0ef14c14e..67ef272a2d 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2773,7 +2773,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
static const struct rte_flow_item *
flow_hw_get_rule_items(struct rte_eth_dev *dev,
- struct rte_flow_template_table *table,
+ const struct rte_flow_template_table *table,
const struct rte_flow_item items[],
uint8_t pattern_template_index,
struct mlx5_hw_q_job *job)
@@ -10143,6 +10143,34 @@ flow_hw_action_list_handle_query_update(struct rte_eth_dev *dev,
update, query, mode, NULL, error);
}
+static int
+flow_hw_calc_table_hash(struct rte_eth_dev *dev,
+ const struct rte_flow_template_table *table,
+ const struct rte_flow_item pattern[],
+ uint8_t pattern_template_index,
+ uint32_t *hash, struct rte_flow_error *error)
+{
+ const struct rte_flow_item *items;
+ /* Temp job to allow adding missing items */
+ static struct rte_flow_item tmp_items[MLX5_HW_MAX_ITEMS];
+ static struct mlx5_hw_q_job job = {.items = tmp_items};
+ int res;
+
+ items = flow_hw_get_rule_items(dev, table, pattern,
+ pattern_template_index,
+ &job);
+ res = mlx5dr_rule_hash_calculate(table->matcher, items,
+ pattern_template_index,
+ MLX5DR_RULE_HASH_CALC_MODE_RAW,
+ hash);
+ if (res)
+ return rte_flow_error_set(error, res,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "hash could not be calculated");
+ return 0;
+}
+
const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
.info_get = flow_hw_info_get,
.configure = flow_hw_configure,
@@ -10186,6 +10214,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
.get_q_aged_flows = flow_hw_get_q_aged_flows,
.item_create = flow_dv_item_create,
.item_release = flow_dv_item_release,
+ .flow_calc_table_hash = flow_hw_calc_table_hash,
};
/**
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
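A hypothetical application-level usage sketch for the new calc-hash path (not
part of the patch); it assumes the ethdev entry point rte_flow_calc_table_hash()
dispatches to the flow_calc_table_hash driver op added above with the same
parameter list, and that pattern template index 0 matches the pattern layout:

#include <rte_flow.h>

static int
example_calc_hash(uint16_t port_id, struct rte_flow_template_table *table,
		  uint32_t *hash)
{
	/* Pattern assumed to match pattern template index 0 of the table. */
	struct rte_flow_item_eth eth_spec = {
		.hdr.ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_calc_table_hash(port_id, table, pattern, 0, hash, &error);
}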
* [PATCH 10/30] net/mlx5: fix insert by index
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (7 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 09/30] net/mlx5: add support for calc hash Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 11/30] net/mlx5: fix query for NIC flow cap Gregory Etelson
` (20 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, erezsh, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou, Alex Vesker
From: Ori Kam <orika@nvidia.com>
Due to mlx5dr internal logic, calls to the rule_create function
must provide an items structure.
This commit creates such a temporary structure.
Fixes: fa16fead9a68 ("net/mlx5/hws: support rule update after its creation")
Cc: erezsh@nvidia.com
Signed-off-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 67ef272a2d..9e549a1ba2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2987,6 +2987,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
void *user_data,
struct rte_flow_error *error)
{
+ struct rte_flow_item items[] = {{.type = RTE_FLOW_ITEM_TYPE_END,}};
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5dr_rule_attr rule_attr = {
.queue_id = queue,
@@ -3050,7 +3051,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
goto free;
}
ret = mlx5dr_rule_create(table->matcher,
- 0, NULL, action_template_index, rule_acts,
+ 0, items, action_template_index, rule_acts,
&rule_attr, (struct mlx5dr_rule *)flow->rule);
if (likely(!ret))
return (struct rte_flow *)flow;
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 11/30] net/mlx5: fix query for NIC flow cap
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (8 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 10/30] net/mlx5: fix insert by index Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 12/30] net/mlx5: add support for more registers Gregory Etelson
` (19 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, bingz, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
From: Ori Kam <orika@nvidia.com>
Add a query for the NIC flow table support bit.
Fixes: 5f44fb1958e5 ("common/mlx5: query capability of registers")
Cc: bingz@nvidia.com
Signed-off-by: Ori Kam <orika@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index ff2d6d10b7..3afb2e9f80 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1082,6 +1082,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->striding_rq = MLX5_GET(cmd_hca_cap, hcattr, striding_rq);
attr->ext_stride_num_range =
MLX5_GET(cmd_hca_cap, hcattr, ext_stride_num_range);
+ attr->nic_flow_table = MLX5_GET(cmd_hca_cap, hcattr, nic_flow_table);
attr->max_flow_counter_15_0 = MLX5_GET(cmd_hca_cap, hcattr,
max_flow_counter_15_0);
attr->max_flow_counter_31_16 = MLX5_GET(cmd_hca_cap, hcattr,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 12/30] net/mlx5: add support for more registers
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (9 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 11/30] net/mlx5: fix query for NIC flow cap Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 13/30] net/mlx5: add validation support for tags Gregory Etelson
` (18 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
From: Ori Kam <orika@nvidia.com>
This commit adds support for additional registers that were added
to the HW.
Signed-off-by: Ori Kam <orika@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 16 +++++++++----
drivers/common/mlx5/mlx5_devx_cmds.h | 2 +-
drivers/common/mlx5/mlx5_prm.h | 36 ++++++++++++++++++++++++----
drivers/net/mlx5/mlx5.c | 4 ++--
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_flow_dv.c | 4 ++++
drivers/net/mlx5/mlx5_flow_hw.c | 2 +-
7 files changed, 53 insertions(+), 13 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 3afb2e9f80..4d8818924a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1229,7 +1229,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->modify_outer_ip_ecn = MLX5_GET
(flow_table_nic_cap, hcattr,
ft_header_modify_nic_receive.outer_ip_ecn);
- attr->set_reg_c = 0xff;
+ attr->set_reg_c = 0xffff;
if (attr->nic_flow_table) {
#define GET_RX_REG_X_BITS \
MLX5_GET(flow_table_nic_cap, hcattr, \
@@ -1238,10 +1238,16 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
MLX5_GET(flow_table_nic_cap, hcattr, \
ft_header_modify_nic_transmit.metadata_reg_c_x)
- uint32_t tx_reg, rx_reg;
+ uint32_t tx_reg, rx_reg, reg_c_8_15;
tx_reg = GET_TX_REG_X_BITS;
+ reg_c_8_15 = MLX5_GET(flow_table_nic_cap, hcattr,
+ ft_field_support_2_nic_transmit.metadata_reg_c_8_15);
+ tx_reg |= ((0xff & reg_c_8_15) << 8);
rx_reg = GET_RX_REG_X_BITS;
+ reg_c_8_15 = MLX5_GET(flow_table_nic_cap, hcattr,
+ ft_field_support_2_nic_receive.metadata_reg_c_8_15);
+ rx_reg |= ((0xff & reg_c_8_15) << 8);
attr->set_reg_c &= (rx_reg & tx_reg);
#undef GET_RX_REG_X_BITS
@@ -1371,7 +1377,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
MLX5_GET(esw_cap, hcattr, esw_manager_vport_number);
}
if (attr->eswitch_manager) {
- uint32_t esw_reg;
+ uint32_t esw_reg, reg_c_8_15;
hcattr = mlx5_devx_get_hca_cap(ctx, in, out, &rc,
MLX5_GET_HCA_CAP_OP_MOD_ESW_FLOW_TABLE |
@@ -1380,7 +1386,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
return rc;
esw_reg = MLX5_GET(flow_table_esw_cap, hcattr,
ft_header_modify_esw_fdb.metadata_reg_c_x);
- attr->set_reg_c &= esw_reg;
+ reg_c_8_15 = MLX5_GET(flow_table_esw_cap, hcattr,
+ ft_field_support_2_esw_fdb.metadata_reg_c_8_15);
+ attr->set_reg_c &= ((0xff & reg_c_8_15) << 8) | esw_reg;
}
return 0;
error:
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 11772431ae..7f23e925a5 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -301,7 +301,7 @@ struct mlx5_hca_attr {
uint32_t cqe_compression_128:1;
uint32_t multi_pkt_send_wqe:1;
uint32_t enhanced_multi_pkt_send_wqe:1;
- uint32_t set_reg_c:8;
+ uint32_t set_reg_c:16;
uint32_t nic_flow_table:1;
uint32_t modify_outer_ip_ecn:1;
union {
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 19c6d0282b..2b499666f8 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -840,6 +840,14 @@ enum mlx5_modification_field {
MLX5_MODI_IN_MPLS_LABEL_3,
MLX5_MODI_IN_MPLS_LABEL_4,
MLX5_MODI_OUT_IPV6_NEXT_HDR = 0x4A,
+ MLX5_MODI_META_REG_C_8 = 0x8F,
+ MLX5_MODI_META_REG_C_9 = 0x90,
+ MLX5_MODI_META_REG_C_10 = 0x91,
+ MLX5_MODI_META_REG_C_11 = 0x92,
+ MLX5_MODI_META_REG_C_12 = 0x93,
+ MLX5_MODI_META_REG_C_13 = 0x94,
+ MLX5_MODI_META_REG_C_14 = 0x95,
+ MLX5_MODI_META_REG_C_15 = 0x96,
MLX5_MODI_INVALID = INT_MAX,
};
@@ -2227,8 +2235,22 @@ struct mlx5_ifc_ft_fields_support_2_bits {
u8 inner_ipv4_checksum_ok[0x1];
u8 inner_l4_checksum_ok[0x1];
u8 outer_ipv4_checksum_ok[0x1];
- u8 outer_l4_checksum_ok[0x1];
- u8 reserved_at_20[0x60];
+ u8 outer_l4_checksum_ok[0x1]; /* end of DW0 */
+ u8 reserved_at_20[0x18];
+ union {
+ struct {
+ u8 metadata_reg_c_15[0x1];
+ u8 metadata_reg_c_14[0x1];
+ u8 metadata_reg_c_13[0x1];
+ u8 metadata_reg_c_12[0x1];
+ u8 metadata_reg_c_11[0x1];
+ u8 metadata_reg_c_10[0x1];
+ u8 metadata_reg_c_9[0x1];
+ u8 metadata_reg_c_8[0x1];
+ };
+ u8 metadata_reg_c_8_15[0x8];
+ }; /* end of DW1 */
+ u8 reserved_at_40[0x40];
};
struct mlx5_ifc_flow_table_nic_cap_bits {
@@ -2250,7 +2272,10 @@ struct mlx5_ifc_flow_table_nic_cap_bits {
ft_header_modify_nic_receive;
struct mlx5_ifc_ft_fields_support_2_bits
ft_field_support_2_nic_receive;
- u8 reserved_at_1480[0x780];
+ u8 reserved_at_1480[0x280];
+ struct mlx5_ifc_ft_fields_support_2_bits
+ ft_field_support_2_nic_transmit;
+ u8 reserved_at_1780[0x480];
struct mlx5_ifc_ft_fields_support_bits
ft_header_modify_nic_transmit;
u8 reserved_at_2000[0x6000];
@@ -2259,7 +2284,10 @@ struct mlx5_ifc_flow_table_nic_cap_bits {
struct mlx5_ifc_flow_table_esw_cap_bits {
u8 reserved_at_0[0x800];
struct mlx5_ifc_ft_fields_support_bits ft_header_modify_esw_fdb;
- u8 reserved_at_C00[0x7400];
+ u8 reserved_at_C00[0x800];
+ struct mlx5_ifc_ft_fields_support_2_bits
+ ft_field_support_2_esw_fdb;
+ u8 reserved_at_1480[0x6b80];
};
enum mlx5_ifc_cross_vhca_object_to_object_supported_types {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 840c566162..cdb4eeb612 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1604,8 +1604,8 @@ mlx5_init_hws_flow_tags_registers(struct mlx5_dev_ctx_shared *sh)
{
struct mlx5_dev_registers *reg = &sh->registers;
uint32_t meta_mode = sh->config.dv_xmeta_en;
- uint8_t masks = (uint8_t)sh->cdev->config.hca_attr.set_reg_c;
- uint8_t unset = 0;
+ uint16_t masks = (uint16_t)sh->cdev->config.hca_attr.set_reg_c;
+ uint16_t unset = 0;
uint32_t i, j;
/*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index a0dcd788b4..0289cbd04b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1373,7 +1373,7 @@ struct mlx5_hws_cnt_svc_mng {
struct mlx5_hws_aso_mng aso_mng __rte_cache_aligned;
};
-#define MLX5_FLOW_HW_TAGS_MAX 8
+#define MLX5_FLOW_HW_TAGS_MAX 12
struct mlx5_dev_registers {
enum modify_reg aso_reg;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 9268a07c84..bdc8d0076a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -970,6 +970,10 @@ static enum mlx5_modification_field reg_to_field[] = {
[REG_C_5] = MLX5_MODI_META_REG_C_5,
[REG_C_6] = MLX5_MODI_META_REG_C_6,
[REG_C_7] = MLX5_MODI_META_REG_C_7,
+ [REG_C_8] = MLX5_MODI_META_REG_C_8,
+ [REG_C_9] = MLX5_MODI_META_REG_C_9,
+ [REG_C_10] = MLX5_MODI_META_REG_C_10,
+ [REG_C_11] = MLX5_MODI_META_REG_C_11,
};
/**
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 9e549a1ba2..ceeb82a649 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -5615,7 +5615,7 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
{
const struct rte_flow_item_tag *tag =
(const struct rte_flow_item_tag *)items[i].spec;
- uint8_t regcs = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
+ uint16_t regcs = (uint8_t)priv->sh->cdev->config.hca_attr.set_reg_c;
if (!((1 << (tag->index - REG_C_0)) & regcs))
return rte_flow_error_set(error, EINVAL,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
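Illustration only (not part of the patch): how the widened 16-bit set_reg_c mask
is assembled in the mlx5_devx_cmds.c changes above. The low 8 bits come from the
legacy metadata_reg_c_x capabilities, the high 8 bits from the new
metadata_reg_c_8_15 fields, and the final mask is the intersection of the RX, TX
and, when the port is an eswitch manager, FDB capabilities:

#include <stdint.h>

static uint16_t
example_combine_reg_c_caps(uint8_t rx_lo, uint8_t rx_hi,
			   uint8_t tx_lo, uint8_t tx_hi,
			   uint8_t esw_lo, uint8_t esw_hi, int eswitch_manager)
{
	uint16_t rx = (uint16_t)rx_lo | ((uint16_t)rx_hi << 8);
	uint16_t tx = (uint16_t)tx_lo | ((uint16_t)tx_hi << 8);
	uint16_t set_reg_c = 0xffff & rx & tx;

	if (eswitch_manager)
		set_reg_c &= (uint16_t)esw_lo | ((uint16_t)esw_hi << 8);
	return set_reg_c;
}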
* [PATCH 13/30] net/mlx5: add validation support for tags
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (10 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 12/30] net/mlx5: add support for more registers Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 14/30] net/mlx5: reuse reformat and modify header actions in a table Gregory Etelson
` (17 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
From: Ori Kam <orika@nvidia.com>
This commit introduces validation for invalid tag indexes.
Signed-off-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 30 +++++++++++++++++++++++++++---
1 file changed, 27 insertions(+), 3 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ceeb82a649..6fc649d736 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4049,7 +4049,8 @@ flow_hw_modify_field_is_used(const struct rte_flow_action_modify_field *action,
}
static int
-flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
+flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
const struct rte_flow_action *mask,
struct rte_flow_error *error)
{
@@ -4118,6 +4119,22 @@ flow_hw_validate_action_modify_field(const struct rte_flow_action *action,
if (ret)
return ret;
}
+ if ((action_conf->dst.field == RTE_FLOW_FIELD_TAG &&
+ action_conf->dst.tag_index >= MLX5_FLOW_HW_TAGS_MAX &&
+ action_conf->dst.tag_index != MLX5_LINEAR_HASH_TAG_INDEX) ||
+ (action_conf->src.field == RTE_FLOW_FIELD_TAG &&
+ action_conf->src.tag_index >= MLX5_FLOW_HW_TAGS_MAX &&
+ action_conf->src.tag_index != MLX5_LINEAR_HASH_TAG_INDEX))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "tag index is out of range");
+ if ((action_conf->dst.field == RTE_FLOW_FIELD_TAG &&
+ flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG, action_conf->dst.tag_index) == REG_NON) ||
+ (action_conf->src.field == RTE_FLOW_FIELD_TAG &&
+ flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG, action_conf->src.tag_index) == REG_NON))
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "tag index is out of range");
if (mask_conf->width != UINT32_MAX)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -4728,7 +4745,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
action_flags |= MLX5_FLOW_ACTION_METER;
break;
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
- ret = flow_hw_validate_action_modify_field(action, mask,
+ ret = flow_hw_validate_action_modify_field(dev, action, mask,
error);
if (ret < 0)
return ret;
@@ -5596,7 +5613,14 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
if (tag == NULL)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL, "Tag spec is NULL");
+ NULL,
+ "Tag spec is NULL");
+ if (tag->index >= MLX5_FLOW_HW_TAGS_MAX &&
+ tag->index != MLX5_LINEAR_HASH_TAG_INDEX)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "Invalid tag index");
tag_idx = flow_hw_get_reg_id(dev, RTE_FLOW_ITEM_TYPE_TAG, tag->index);
if (tag_idx == REG_NON)
return rte_flow_error_set(error, EINVAL,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
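For reference, a hypothetical TAG pattern item of the kind the new check
validates (not part of the patch); the index must be below MLX5_FLOW_HW_TAGS_MAX
(or equal MLX5_LINEAR_HASH_TAG_INDEX) and must map to an available REG_C
register on the device:

#include <stdint.h>
#include <rte_flow.h>

static const struct rte_flow_item_tag example_tag_spec = {
	.index = 3,		/* application-chosen tag register index */
	.data = 0x1234,		/* value to match */
};
static const struct rte_flow_item_tag example_tag_mask = {
	.index = 0xff,
	.data = UINT32_MAX,
};
static const struct rte_flow_item example_tag_pattern[] = {
	{
		.type = RTE_FLOW_ITEM_TYPE_TAG,
		.spec = &example_tag_spec,
		.mask = &example_tag_mask,
	},
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};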
* [PATCH 14/30] net/mlx5: reuse reformat and modify header actions in a table
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (11 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 13/30] net/mlx5: add validation support for tags Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 15/30] net/mlx5/hws: check the rule status on rule update Gregory Etelson
` (16 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou
If an application defined several action templates with non-shared
reformat or modify header actions AND used these templates to create
a table, HWS can now share the reformat or modify header resources
instead of creating a resource for each action template.
The patch activates the HWS code path that provides reformat and
modify header resource sharing.
The patch also updates the modify field and raw encap template action
validations:
- modify field no longer allows an empty action template mask.
- raw encap now validates the action template mask.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 8 +-
drivers/net/mlx5/mlx5_flow_dv.c | 3 +-
drivers/net/mlx5/mlx5_flow_hw.c | 583 ++++++++++++++++++++++++--------
3 files changed, 451 insertions(+), 143 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 64e2fc6f04..ddb3b7b6fd 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1368,7 +1368,9 @@ struct mlx5_hw_jump_action {
struct mlx5_hw_encap_decap_action {
struct mlx5dr_action *action; /* Action object. */
/* Is header_reformat action shared across flows in table. */
- bool shared;
+ uint32_t shared:1;
+ uint32_t multi_pattern:1;
+ volatile uint32_t *multi_pattern_refcnt;
size_t data_size; /* Action metadata size. */
uint8_t data[]; /* Action data. */
};
@@ -1382,7 +1384,9 @@ struct mlx5_hw_modify_header_action {
/* Modify header action position in action rule table. */
uint16_t pos;
/* Is MODIFY_HEADER action shared across flows in table. */
- bool shared;
+ uint32_t shared:1;
+ uint32_t multi_pattern:1;
+ volatile uint32_t *multi_pattern_refcnt;
/* Amount of modification commands stored in the precompiled buffer. */
uint32_t mhdr_cmds_num;
/* Precompiled modification commands. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index bdc8d0076a..84b94a9815 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4579,7 +4579,8 @@ flow_dv_convert_encap_data(const struct rte_flow_item *items, uint8_t *buf,
(void *)items->type,
"items total size is too big"
" for encap action");
- rte_memcpy((void *)&buf[temp_size], items->spec, len);
+ if (items->spec)
+ rte_memcpy(&buf[temp_size], items->spec, len);
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_ETH:
eth = (struct rte_ether_hdr *)&buf[temp_size];
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6fc649d736..84c78ba19c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -71,6 +71,95 @@ struct mlx5_indlst_legacy {
enum rte_flow_action_type legacy_type;
};
+#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
+(((const struct encap_type *)(ptr))->definition)
+
+struct mlx5_multi_pattern_ctx {
+ union {
+ struct mlx5dr_action_reformat_header reformat_hdr;
+ struct mlx5dr_action_mh_pattern mh_pattern;
+ };
+ union {
+ /* action template auxiliary structures for object destruction */
+ struct mlx5_hw_encap_decap_action *encap;
+ struct mlx5_hw_modify_header_action *mhdr;
+ };
+ /* multi pattern action */
+ struct mlx5dr_rule_action *rule_action;
+};
+
+#define MLX5_MULTIPATTERN_ENCAP_NUM 4
+
+struct mlx5_tbl_multi_pattern_ctx {
+ struct {
+ uint32_t elements_num;
+ struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+ } reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+ struct {
+ uint32_t elements_num;
+ struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+ } mh;
+};
+
+#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
+
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+ struct rte_flow_template_table *tbl,
+ struct mlx5_tbl_multi_pattern_ctx *mpat,
+ struct rte_flow_error *error);
+
+static __rte_always_inline int
+mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
+{
+ switch (type) {
+ case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
+ return 0;
+ case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
+ return 1;
+ case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
+ return 2;
+ case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+ return 3;
+ default:
+ break;
+ }
+ return -1;
+}
+
+static __rte_always_inline enum mlx5dr_action_type
+mlx5_multi_pattern_reformat_index_to_type(uint32_t ix)
+{
+ switch (ix) {
+ case 0:
+ return MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+ case 1:
+ return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+ case 2:
+ return MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2;
+ case 3:
+ return MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3;
+ default:
+ break;
+ }
+ return MLX5DR_ACTION_TYP_MAX;
+}
+
+static inline enum mlx5dr_table_type
+get_mlx5dr_table_type(const struct rte_flow_attr *attr)
+{
+ enum mlx5dr_table_type type;
+
+ if (attr->transfer)
+ type = MLX5DR_TABLE_TYPE_FDB;
+ else if (attr->egress)
+ type = MLX5DR_TABLE_TYPE_NIC_TX;
+ else
+ type = MLX5DR_TABLE_TYPE_NIC_RX;
+ return type;
+}
+
struct mlx5_mirror_clone {
enum rte_flow_action_type type;
void *action_ctx;
@@ -462,6 +551,34 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
return 0;
}
+static void
+flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
+{
+ if (encap_decap->multi_pattern) {
+ uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
+ 1, __ATOMIC_RELAXED);
+ if (refcnt)
+ return;
+ mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
+ }
+ if (encap_decap->action)
+ mlx5dr_action_destroy(encap_decap->action);
+}
+
+static void
+flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
+{
+ if (mhdr->multi_pattern) {
+ uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
+ 1, __ATOMIC_RELAXED);
+ if (refcnt)
+ return;
+ mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
+ }
+ if (mhdr->action)
+ mlx5dr_action_destroy(mhdr->action);
+}
+
/**
* Destroy DR actions created by action template.
*
@@ -503,14 +620,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
acts->tir = NULL;
}
if (acts->encap_decap) {
- if (acts->encap_decap->action)
- mlx5dr_action_destroy(acts->encap_decap->action);
+ flow_hw_template_destroy_reformat_action(acts->encap_decap);
mlx5_free(acts->encap_decap);
acts->encap_decap = NULL;
}
if (acts->mhdr) {
- if (acts->mhdr->action)
- mlx5dr_action_destroy(acts->mhdr->action);
+ flow_hw_template_destroy_mhdr_action(acts->mhdr);
mlx5_free(acts->mhdr);
acts->mhdr = NULL;
}
@@ -881,8 +996,6 @@ flow_hw_action_modify_field_is_shared(const struct rte_flow_action *action,
if (v->src.field == RTE_FLOW_FIELD_VALUE) {
uint32_t j;
- if (m == NULL)
- return false;
for (j = 0; j < RTE_DIM(m->src.value); ++j) {
/*
* Immediate value is considered to be masked
@@ -1630,6 +1743,137 @@ table_template_translate_indirect_list(struct rte_eth_dev *dev,
return ret;
}
+static int
+mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
+ const struct rte_flow_template_table_attr *table_attr,
+ struct mlx5_hw_actions *acts,
+ struct rte_flow_actions_template *at,
+ const struct rte_flow_item *enc_item,
+ const struct rte_flow_item *enc_item_m,
+ uint8_t *encap_data, uint8_t *encap_data_m,
+ struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
+ size_t data_size, uint16_t reformat_src,
+ enum mlx5dr_action_type refmt_type,
+ struct rte_flow_error *error)
+{
+ int mp_reformat_ix = mlx5_multi_pattern_reformat_to_index(refmt_type);
+ const struct rte_flow_attr *attr = &table_attr->flow_attr;
+ enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr);
+ struct mlx5dr_action_reformat_header hdr;
+ uint8_t buf[MLX5_ENCAP_MAX_LEN];
+ bool shared_rfmt = false;
+ int ret;
+
+ MLX5_ASSERT(at->reformat_off != UINT16_MAX);
+ if (enc_item) {
+ MLX5_ASSERT(!encap_data);
+ ret = flow_dv_convert_encap_data(enc_item, buf, &data_size, error);
+ if (ret)
+ return ret;
+ encap_data = buf;
+ if (enc_item_m)
+ shared_rfmt = true;
+ } else if (encap_data && encap_data_m) {
+ shared_rfmt = true;
+ }
+ acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(*acts->encap_decap) + data_size,
+ 0, SOCKET_ID_ANY);
+ if (!acts->encap_decap)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "no memory for reformat context");
+ hdr.sz = data_size;
+ hdr.data = encap_data;
+ if (shared_rfmt || mp_reformat_ix < 0) {
+ uint16_t reformat_ix = at->reformat_off;
+ uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] |
+ MLX5DR_ACTION_FLAG_SHARED;
+
+ acts->encap_decap->action =
+ mlx5dr_action_create_reformat(priv->dr_ctx, refmt_type,
+ 1, &hdr, 0, flags);
+ if (!acts->encap_decap->action)
+ return -rte_errno;
+ acts->rule_acts[reformat_ix].action = acts->encap_decap->action;
+ acts->rule_acts[reformat_ix].reformat.data = acts->encap_decap->data;
+ acts->rule_acts[reformat_ix].reformat.offset = 0;
+ acts->encap_decap->shared = true;
+ } else {
+ uint32_t ix;
+ typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
+ mp_reformat_ix;
+
+ ix = reformat_ctx->elements_num++;
+ reformat_ctx->ctx[ix].reformat_hdr = hdr;
+ reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
+ reformat_ctx->ctx[ix].encap = acts->encap_decap;
+ acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
+ acts->encap_decap_pos = at->reformat_off;
+ acts->encap_decap->data_size = data_size;
+ ret = __flow_hw_act_data_encap_append
+ (priv, acts, (at->actions + reformat_src)->type,
+ reformat_src, at->reformat_off, data_size);
+ if (ret)
+ return -rte_errno;
+ }
+ return 0;
+}
+
+static int
+mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
+ const struct mlx5_flow_template_table_cfg *cfg,
+ struct mlx5_hw_actions *acts,
+ struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
+ struct mlx5_hw_modify_header_action *mhdr,
+ struct rte_flow_error *error)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+ const struct rte_flow_attr *attr = &table_attr->flow_attr;
+ enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr);
+ uint16_t mhdr_ix = mhdr->pos;
+ struct mlx5dr_action_mh_pattern pattern = {
+ .sz = sizeof(struct mlx5_modification_cmd) * mhdr->mhdr_cmds_num
+ };
+
+ if (flow_hw_validate_compiled_modify_field(dev, cfg, mhdr, error)) {
+ __flow_hw_action_template_destroy(dev, acts);
+ return -rte_errno;
+ }
+ acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
+ 0, SOCKET_ID_ANY);
+ if (!acts->mhdr)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "translate modify_header: no memory for modify header context");
+ rte_memcpy(acts->mhdr, mhdr, sizeof(*mhdr));
+ pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
+ if (mhdr->shared) {
+ uint32_t flags = mlx5_hw_act_flag[!!attr->group][tbl_type] |
+ MLX5DR_ACTION_FLAG_SHARED;
+
+ acts->mhdr->action = mlx5dr_action_create_modify_header
+ (priv->dr_ctx, 1, &pattern, 0,
+ flags);
+ if (!acts->mhdr->action)
+ return rte_flow_error_set(error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "translate modify_header: failed to create DR action");
+ acts->rule_acts[mhdr_ix].action = acts->mhdr->action;
+ } else {
+ typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
+ uint32_t idx = mh->elements_num;
+ struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
+
+ mh_ctx->mh_pattern = pattern;
+ mh_ctx->mhdr = acts->mhdr;
+ mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+ acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+ }
+ return 0;
+}
+
/**
* Translate rte_flow actions to DR action.
*
@@ -1658,6 +1902,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
const struct mlx5_flow_template_table_cfg *cfg,
struct mlx5_hw_actions *acts,
struct rte_flow_actions_template *at,
+ struct mlx5_tbl_multi_pattern_ctx *mp_ctx,
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
@@ -1820,32 +2065,26 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
MLX5_ASSERT(!reformat_used);
- enc_item = ((const struct rte_flow_action_vxlan_encap *)
- actions->conf)->definition;
+ enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+ actions->conf);
if (masks->conf)
- enc_item_m = ((const struct rte_flow_action_vxlan_encap *)
- masks->conf)->definition;
+ enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+ masks->conf);
reformat_used = true;
reformat_src = src_pos;
refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
break;
case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
MLX5_ASSERT(!reformat_used);
- enc_item = ((const struct rte_flow_action_nvgre_encap *)
- actions->conf)->definition;
+ enc_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+ actions->conf);
if (masks->conf)
- enc_item_m = ((const struct rte_flow_action_nvgre_encap *)
- masks->conf)->definition;
+ enc_item_m = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+ masks->conf);
reformat_used = true;
reformat_src = src_pos;
refmt_type = MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
break;
- case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
- case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
- MLX5_ASSERT(!reformat_used);
- reformat_used = true;
- refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
- break;
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
raw_encap_data =
(const struct rte_flow_action_raw_encap *)
@@ -1869,6 +2108,12 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
}
reformat_src = src_pos;
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+ MLX5_ASSERT(!reformat_used);
+ reformat_used = true;
+ refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
+ break;
case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
reformat_used = true;
refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
@@ -2005,83 +2250,20 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
}
}
if (mhdr.pos != UINT16_MAX) {
- struct mlx5dr_action_mh_pattern pattern;
- uint32_t flags;
- uint32_t bulk_size;
- size_t mhdr_len;
-
- if (flow_hw_validate_compiled_modify_field(dev, cfg, &mhdr, error)) {
- __flow_hw_action_template_destroy(dev, acts);
- return -rte_errno;
- }
- acts->mhdr = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*acts->mhdr),
- 0, SOCKET_ID_ANY);
- if (!acts->mhdr)
- goto err;
- rte_memcpy(acts->mhdr, &mhdr, sizeof(*acts->mhdr));
- mhdr_len = sizeof(struct mlx5_modification_cmd) * acts->mhdr->mhdr_cmds_num;
- flags = mlx5_hw_act_flag[!!attr->group][type];
- if (acts->mhdr->shared) {
- flags |= MLX5DR_ACTION_FLAG_SHARED;
- bulk_size = 0;
- } else {
- bulk_size = rte_log2_u32(table_attr->nb_flows);
- }
- pattern.data = (__be64 *)acts->mhdr->mhdr_cmds;
- pattern.sz = mhdr_len;
- acts->mhdr->action = mlx5dr_action_create_modify_header
- (priv->dr_ctx, 1, &pattern,
- bulk_size, flags);
- if (!acts->mhdr->action)
+ ret = mlx5_tbl_translate_modify_header(dev, cfg, acts, mp_ctx,
+ &mhdr, error);
+ if (ret)
goto err;
- acts->rule_acts[acts->mhdr->pos].action = acts->mhdr->action;
}
if (reformat_used) {
- struct mlx5dr_action_reformat_header hdr;
- uint8_t buf[MLX5_ENCAP_MAX_LEN];
- bool shared_rfmt = true;
-
- MLX5_ASSERT(at->reformat_off != UINT16_MAX);
- if (enc_item) {
- MLX5_ASSERT(!encap_data);
- if (flow_dv_convert_encap_data(enc_item, buf, &data_size, error))
- goto err;
- encap_data = buf;
- if (!enc_item_m)
- shared_rfmt = false;
- } else if (encap_data && !encap_data_m) {
- shared_rfmt = false;
- }
- acts->encap_decap = mlx5_malloc(MLX5_MEM_ZERO,
- sizeof(*acts->encap_decap) + data_size,
- 0, SOCKET_ID_ANY);
- if (!acts->encap_decap)
- goto err;
- if (data_size) {
- acts->encap_decap->data_size = data_size;
- memcpy(acts->encap_decap->data, encap_data, data_size);
- }
-
- hdr.sz = data_size;
- hdr.data = encap_data;
- acts->encap_decap->action = mlx5dr_action_create_reformat
- (priv->dr_ctx, refmt_type,
- 1, &hdr,
- shared_rfmt ? 0 : rte_log2_u32(table_attr->nb_flows),
- mlx5_hw_act_flag[!!attr->group][type] |
- (shared_rfmt ? MLX5DR_ACTION_FLAG_SHARED : 0));
- if (!acts->encap_decap->action)
- goto err;
- acts->rule_acts[at->reformat_off].action = acts->encap_decap->action;
- acts->rule_acts[at->reformat_off].reformat.data = acts->encap_decap->data;
- if (shared_rfmt)
- acts->rule_acts[at->reformat_off].reformat.offset = 0;
- else if (__flow_hw_act_data_encap_append(priv, acts,
- (at->actions + reformat_src)->type,
- reformat_src, at->reformat_off, data_size))
+ ret = mlx5_tbl_translate_reformat(priv, table_attr, acts, at,
+ enc_item, enc_item_m,
+ encap_data, encap_data_m,
+ mp_ctx, data_size,
+ reformat_src,
+ refmt_type, error);
+ if (ret)
goto err;
- acts->encap_decap->shared = shared_rfmt;
- acts->encap_decap_pos = at->reformat_off;
}
return 0;
err:
@@ -2110,15 +2292,20 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
struct rte_flow_template_table *tbl,
struct rte_flow_error *error)
{
+ int ret;
uint32_t i;
+ struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
for (i = 0; i < tbl->nb_action_templates; i++) {
if (__flow_hw_actions_translate(dev, &tbl->cfg,
&tbl->ats[i].acts,
tbl->ats[i].action_template,
- error))
+ &mpat, error))
goto err;
}
+ ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+ if (ret)
+ goto err;
return 0;
err:
while (i--)
@@ -3627,6 +3814,143 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
return 0;
}
+static int
+mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
+ struct rte_flow_template_table *tbl,
+ struct mlx5_tbl_multi_pattern_ctx *mpat,
+ struct rte_flow_error *error)
+{
+ uint32_t i;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
+ const struct rte_flow_attr *attr = &table_attr->flow_attr;
+ enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+ uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
+ struct mlx5dr_action *dr_action;
+ uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+
+ for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+ uint32_t j;
+ uint32_t *reformat_refcnt;
+ typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
+ struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+ enum mlx5dr_action_type reformat_type =
+ mlx5_multi_pattern_reformat_index_to_type(i);
+
+ if (!reformat->elements_num)
+ continue;
+ for (j = 0; j < reformat->elements_num; j++)
+ hdr[j] = reformat->ctx[j].reformat_hdr;
+ reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
+ rte_socket_id());
+ if (!reformat_refcnt)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "failed to allocate multi-pattern encap counter");
+ *reformat_refcnt = reformat->elements_num;
+ dr_action = mlx5dr_action_create_reformat
+ (priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
+ bulk_size, flags);
+ if (!dr_action) {
+ mlx5_free(reformat_refcnt);
+ return rte_flow_error_set(error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "failed to create multi-pattern encap action");
+ }
+ for (j = 0; j < reformat->elements_num; j++) {
+ reformat->ctx[j].rule_action->action = dr_action;
+ reformat->ctx[j].encap->action = dr_action;
+ reformat->ctx[j].encap->multi_pattern = 1;
+ reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+ }
+ }
+ if (mpat->mh.elements_num) {
+ typeof(mpat->mh) *mh = &mpat->mh;
+ struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+ uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
+ 0, rte_socket_id());
+
+ if (!mh_refcnt)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "failed to allocate modify header counter");
+ *mh_refcnt = mpat->mh.elements_num;
+ for (i = 0; i < mpat->mh.elements_num; i++)
+ pattern[i] = mh->ctx[i].mh_pattern;
+ dr_action = mlx5dr_action_create_modify_header
+ (priv->dr_ctx, mpat->mh.elements_num, pattern,
+ bulk_size, flags);
+ if (!dr_action) {
+ mlx5_free(mh_refcnt);
+ return rte_flow_error_set(error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "failed to create multi-pattern header modify action");
+ }
+ for (i = 0; i < mpat->mh.elements_num; i++) {
+ mh->ctx[i].rule_action->action = dr_action;
+ mh->ctx[i].mhdr->action = dr_action;
+ mh->ctx[i].mhdr->multi_pattern = 1;
+ mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+ }
+ }
+
+ return 0;
+}
+
+static int
+mlx5_hw_build_template_table(struct rte_eth_dev *dev,
+ uint8_t nb_action_templates,
+ struct rte_flow_actions_template *action_templates[],
+ struct mlx5dr_action_template *at[],
+ struct rte_flow_template_table *tbl,
+ struct rte_flow_error *error)
+{
+ int ret;
+ uint8_t i;
+ struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
+
+ for (i = 0; i < nb_action_templates; i++) {
+ uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
+ __ATOMIC_RELAXED);
+
+ if (refcnt <= 1) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ &action_templates[i], "invalid AT refcount");
+ goto at_error;
+ }
+ at[i] = action_templates[i]->tmpl;
+ tbl->ats[i].action_template = action_templates[i];
+ LIST_INIT(&tbl->ats[i].acts.act_list);
+ /* do NOT translate table action if `dev` was not started */
+ if (!dev->data->dev_started)
+ continue;
+ ret = __flow_hw_actions_translate(dev, &tbl->cfg,
+ &tbl->ats[i].acts,
+ action_templates[i],
+ &mpat, error);
+ if (ret) {
+ i++;
+ goto at_error;
+ }
+ }
+ tbl->nb_action_templates = nb_action_templates;
+ ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+ if (ret)
+ goto at_error;
+ return 0;
+
+at_error:
+ while (i--) {
+ __flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
+ __atomic_sub_fetch(&action_templates[i]->refcnt,
+ 1, __ATOMIC_RELAXED);
+ }
+ return rte_errno;
+}
+
/**
* Create flow table.
*
@@ -3779,29 +4103,12 @@ flow_hw_table_create(struct rte_eth_dev *dev,
}
tbl->nb_item_templates = nb_item_templates;
/* Build the action template. */
- for (i = 0; i < nb_action_templates; i++) {
- uint32_t ret;
-
- ret = __atomic_fetch_add(&action_templates[i]->refcnt, 1,
- __ATOMIC_RELAXED) + 1;
- if (ret <= 1) {
- rte_errno = EINVAL;
- goto at_error;
- }
- at[i] = action_templates[i]->tmpl;
- tbl->ats[i].action_template = action_templates[i];
- LIST_INIT(&tbl->ats[i].acts.act_list);
- if (!port_started)
- continue;
- err = __flow_hw_actions_translate(dev, &tbl->cfg,
- &tbl->ats[i].acts,
- action_templates[i], &sub_error);
- if (err) {
- i++;
- goto at_error;
- }
+ err = mlx5_hw_build_template_table(dev, nb_action_templates,
+ action_templates, at, tbl, &sub_error);
+ if (err) {
+ i = nb_item_templates;
+ goto it_error;
}
- tbl->nb_action_templates = nb_action_templates;
tbl->matcher = mlx5dr_matcher_create
(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
if (!tbl->matcher)
@@ -3815,7 +4122,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
return tbl;
at_error:
- while (i--) {
+ for (i = 0; i < nb_action_templates; i++) {
__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
__atomic_fetch_sub(&action_templates[i]->refcnt,
1, __ATOMIC_RELAXED);
@@ -4058,6 +4365,10 @@ flow_hw_validate_action_modify_field(struct rte_eth_dev *dev,
const struct rte_flow_action_modify_field *mask_conf = mask->conf;
int ret;
+ if (!mask_conf)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "modify_field mask conf is missing");
if (action_conf->operation != mask_conf->operation)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
@@ -4434,16 +4745,25 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
static int
-flow_hw_validate_action_raw_encap(struct rte_eth_dev *dev __rte_unused,
- const struct rte_flow_action *action,
+flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
+ const struct rte_flow_action *mask,
struct rte_flow_error *error)
{
- const struct rte_flow_action_raw_encap *raw_encap_data = action->conf;
+ const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
+ const struct rte_flow_action_raw_encap *action_conf = action->conf;
- if (!raw_encap_data || !raw_encap_data->size || !raw_encap_data->data)
+ if (!mask_conf || !mask_conf->size)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, mask,
+ "raw_encap: size must be masked");
+ if (!action_conf || !action_conf->size)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION, action,
- "invalid raw_encap_data");
+ "raw_encap: invalid action configuration");
+ if (mask_conf->data && !action_conf->data)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "raw_encap: masked data is missing");
return 0;
}
@@ -4724,7 +5044,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
action_flags |= MLX5_FLOW_ACTION_DECAP;
break;
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
- ret = flow_hw_validate_action_raw_encap(dev, action, error);
+ ret = flow_hw_validate_action_raw_encap(action, mask, error);
if (ret < 0)
return ret;
action_flags |= MLX5_FLOW_ACTION_ENCAP;
@@ -9599,20 +9919,6 @@ mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror)
mlx5_free(mirror);
}
-static inline enum mlx5dr_table_type
-get_mlx5dr_table_type(const struct rte_flow_attr *attr)
-{
- enum mlx5dr_table_type type;
-
- if (attr->transfer)
- type = MLX5DR_TABLE_TYPE_FDB;
- else if (attr->egress)
- type = MLX5DR_TABLE_TYPE_NIC_TX;
- else
- type = MLX5DR_TABLE_TYPE_NIC_RX;
- return type;
-}
-
static __rte_always_inline bool
mlx5_mirror_terminal_action(const struct rte_flow_action *action)
{
@@ -9751,9 +10057,6 @@ mirror_format_port(struct rte_eth_dev *dev,
return 0;
}
-#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
-(((const struct encap_type *)(ptr))->definition)
-
static int
hw_mirror_clone_reformat(const struct rte_flow_action *actions,
struct mlx5dr_action_dest_attr *dest_attr,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
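A conceptual sketch (not from the patch) of the release idiom used by the new
flow_hw_template_destroy_*_action() helpers above for multi-pattern resources:
each action template holds a reference on a heap-allocated counter, and only the
last release frees the counter and destroys the single shared mlx5dr action. The
example_ names are illustrative stand-ins for the encap_decap/mhdr contexts:

#include <stdint.h>
#include <stdlib.h>

struct example_shared_action {
	void *action;			/* shared mlx5dr action handle */
	uint32_t *refcnt;		/* counter shared by all templates */
	void (*destroy)(void *action);	/* destructor for the handle */
};

static void
example_shared_action_put(struct example_shared_action *s)
{
	if (__atomic_sub_fetch(s->refcnt, 1, __ATOMIC_RELAXED))
		return;			/* other templates still hold it */
	free(s->refcnt);
	s->destroy(s->action);
}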
* [PATCH 15/30] net/mlx5/hws: check the rule status on rule update
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (12 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 14/30] net/mlx5: reuse reformat and modify header actions in a table Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 16/30] net/mlx5/hws: support IPsec encryption/decryption action Gregory Etelson
` (15 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Itamar Gozlan, Hamdan Igbaria,
Alex Vesker, Matan Azrad, Viacheslav Ovsiienko, Ori Kam,
Suanming Mou
From: Itamar Gozlan <igozlan@nvidia.com>
Only allow rule updates for rules whose status value equals
MLX5DR_RULE_STATUS_CREATED.
Otherwise, the rule may be in an unstable stage, such as being deleted,
and the update would result in a faulty, unexpected scenario.
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_rule.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 980a99b226..70d5c19e1f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -756,6 +756,12 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
return -rte_errno;
+ if (rule_handle->status != MLX5DR_RULE_STATUS_CREATED) {
+ DR_LOG(ERR, "Current rule status does not allow update");
+ rte_errno = EBUSY;
+ return -rte_errno;
+ }
+
ret = mlx5dr_rule_create_hws(rule_handle,
attr,
0,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 16/30] net/mlx5/hws: support IPsec encryption/decryption action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (13 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 15/30] net/mlx5/hws: check the rule status on rule update Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 17/30] net/mlx5/hws: support ASO IPsec action Gregory Etelson
` (14 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support crypto action creation. This action allows encryption/decryption
of the packet according to a specific security crypto protocol.
For now, encryption/decryption is supported for the IPsec protocol only.
IPsec encryption handles the encoding of the data.
IPsec decryption handles the decoding of the data, and the decryption
result status is placed in the ipsec_syndrome field.
Both operations should be used only for packets that carry an ESP header
and an IPsec trailer.
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 12 ++
drivers/net/mlx5/hws/mlx5dr.h | 42 +++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 172 +++++++++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_action.h | 44 ++++---
drivers/net/mlx5/hws/mlx5dr_cmd.c | 8 ++
drivers/net/mlx5/hws/mlx5dr_cmd.h | 2 +-
drivers/net/mlx5/hws/mlx5dr_debug.c | 2 +
drivers/net/mlx5/hws/mlx5dr_matcher.c | 5 +
8 files changed, 266 insertions(+), 21 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 2b499666f8..0eecf0691b 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3498,6 +3498,8 @@ enum mlx5_ifc_stc_action_type {
MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT = 0x0b,
MLX5_IFC_STC_ACTION_TYPE_TAG = 0x0c,
MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST = 0x0e,
+ MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION = 0x10,
+ MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION = 0x11,
MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12,
MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14,
MLX5_IFC_STC_ACTION_TYPE_ADD_FIELD = 0x1b,
@@ -3546,6 +3548,14 @@ struct mlx5_ifc_stc_ste_param_execute_aso_bits {
u8 reserved_at_28[0x18];
};
+struct mlx5_ifc_stc_ste_param_ipsec_encrypt_bits {
+ u8 ipsec_object_id[0x20];
+};
+
+struct mlx5_ifc_stc_ste_param_ipsec_decrypt_bits {
+ u8 ipsec_object_id[0x20];
+};
+
struct mlx5_ifc_stc_ste_param_header_modify_list_bits {
u8 header_modify_pattern_id[0x20];
u8 header_modify_argument_id[0x20];
@@ -3612,6 +3622,8 @@ union mlx5_ifc_stc_param_bits {
struct mlx5_ifc_set_action_in_bits set;
struct mlx5_ifc_copy_action_in_bits copy;
struct mlx5_ifc_stc_ste_param_vport_bits vport;
+ struct mlx5_ifc_stc_ste_param_ipsec_encrypt_bits ipsec_encrypt;
+ struct mlx5_ifc_stc_ste_param_ipsec_decrypt_bits ipsec_decrypt;
u8 reserved_at_0[0x80];
};
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 39d902e762..74d05229c7 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -45,6 +45,8 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_PUSH_VLAN,
MLX5DR_ACTION_TYP_ASO_METER,
MLX5DR_ACTION_TYP_ASO_CT,
+ MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT,
+ MLX5DR_ACTION_TYP_CRYPTO_DECRYPT,
MLX5DR_ACTION_TYP_DEST_ROOT,
MLX5DR_ACTION_TYP_DEST_ARRAY,
MLX5DR_ACTION_TYP_MAX,
@@ -176,6 +178,22 @@ struct mlx5dr_action_mh_pattern {
__be64 *data;
};
+enum mlx5dr_action_crypto_op {
+ MLX5DR_ACTION_CRYPTO_OP_NONE,
+ MLX5DR_ACTION_CRYPTO_OP_ENCRYPT,
+ MLX5DR_ACTION_CRYPTO_OP_DECRYPT,
+};
+
+enum mlx5dr_action_crypto_type {
+ MLX5DR_ACTION_CRYPTO_TYPE_NISP,
+ MLX5DR_ACTION_CRYPTO_TYPE_IPSEC,
+};
+
+struct mlx5dr_action_crypto_attr {
+ enum mlx5dr_action_crypto_type crypto_type;
+ enum mlx5dr_action_crypto_op op;
+};
+
/* In actions that take offset, the offset is unique, pointing to a single
* resource and the user should not reuse the same index because data changing
* is not atomic.
@@ -216,6 +234,10 @@ struct mlx5dr_rule_action {
uint32_t offset;
enum mlx5dr_action_aso_ct_flags direction;
} aso_ct;
+
+ struct {
+ uint32_t offset;
+ } crypto;
};
};
@@ -691,6 +713,26 @@ mlx5dr_action_create_dest_root(struct mlx5dr_context *ctx,
uint16_t priority,
uint32_t flags);
+/* Create crypto action, this action will create specific security protocol
+ * encryption/decryption, for now we only support IPSec protocol.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] devx_obj
+ * The SADB corresponding devx obj
+ * @param[in] attr
+ * attributes: specifies if to encrypt/decrypt,
+ * also specifies the crypto security protocol.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_crypto(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ struct mlx5dr_action_crypto_attr *attr,
+ uint32_t flags);
+
/* Destroy direct rule action.
*
* @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 11a7c58925..4910b4f730 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -9,11 +9,12 @@
#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1
/* This is the maximum allowed action order for each table type:
- * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, Term
- * RX: TAG, DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY,
- * ENCAP, Term
- * FDB: DECAP, POP_VLAN, CTR, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY,
- * ENCAP, Term
+ * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, ENCRYPT,
+ * Term
+ * RX: TAG, DECAP, POP_VLAN, CTR, DECRYPT, ASO_METER, ASO_CT, PUSH_VLAN,
+ * MODIFY, ENCAP, Term
+ * FDB: DECAP, POP_VLAN, CTR, DECRYPT, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY,
+ * ENCAP, ENCRYPT, Term
*/
static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = {
[MLX5DR_TABLE_TYPE_NIC_RX] = {
@@ -23,6 +24,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_CTR),
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT),
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
@@ -49,6 +51,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT),
BIT(MLX5DR_ACTION_TYP_TBL) |
BIT(MLX5DR_ACTION_TYP_MISS) |
BIT(MLX5DR_ACTION_TYP_DROP) |
@@ -61,6 +64,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_CTR),
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT),
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
@@ -68,6 +72,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT),
BIT(MLX5DR_ACTION_TYP_TBL) |
BIT(MLX5DR_ACTION_TYP_MISS) |
BIT(MLX5DR_ACTION_TYP_VPORT) |
@@ -266,6 +271,41 @@ bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions,
return valid_combo;
}
+bool mlx5dr_action_check_restrictions(struct mlx5dr_matcher *matcher,
+ enum mlx5dr_action_type *actions)
+{
+ uint32_t restricted_bits;
+ uint8_t idx = 0;
+
+ /* Check for restricted actions, these actions are restricted
+ * to RX or TX only in FDB domain.
+ * if one of these actions presented require correct optimize_flow_src.
+ */
+ if (matcher->tbl->type != MLX5DR_TABLE_TYPE_FDB)
+ return false;
+
+ switch (matcher->attr.optimize_flow_src) {
+ case MLX5DR_MATCHER_FLOW_SRC_WIRE:
+ restricted_bits = BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT);
+ break;
+ case MLX5DR_MATCHER_FLOW_SRC_VPORT:
+ restricted_bits = BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT);
+ break;
+ default:
+ restricted_bits = BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT) |
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT);
+ }
+
+ while (actions[idx] != MLX5DR_ACTION_TYP_LAST) {
+ if (BIT(actions[idx++]) & restricted_bits) {
+ DR_LOG(ERR, "Invalid actions combination containing restricted actions was provided");
+ return true;
+ }
+ }
+
+ return false;
+}
+
int mlx5dr_action_root_build_attr(struct mlx5dr_rule_action rule_actions[],
uint32_t num_actions,
struct mlx5dv_flow_action_attr *attr)
@@ -383,6 +423,24 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
use_fixup = true;
break;
+ case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION:
+ if (fw_tbl_type == FS_FT_FDB_RX) {
+ fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
+ fixup_stc_attr->action_offset = stc_attr->action_offset;
+ fixup_stc_attr->stc_offset = stc_attr->stc_offset;
+ use_fixup = true;
+ }
+ break;
+
+ case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION:
+ if (fw_tbl_type == FS_FT_FDB_TX) {
+ fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
+ fixup_stc_attr->action_offset = stc_attr->action_offset;
+ fixup_stc_attr->stc_offset = stc_attr->stc_offset;
+ use_fixup = true;
+ }
+ break;
+
default:
break;
}
@@ -605,6 +663,16 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS;
attr->insert_header.header_size = MLX5DR_ACTION_HDR_LEN_L2_VLAN;
break;
+ case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION;
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->id = obj->id;
+ break;
+ case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION;
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->id = obj->id;
+ break;
default:
DR_LOG(ERR, "Invalid action type %d", action->type);
assert(false);
@@ -1943,6 +2011,55 @@ mlx5dr_action_create_dest_root(struct mlx5dr_context *ctx,
return NULL;
}
+struct mlx5dr_action *
+mlx5dr_action_create_crypto(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ struct mlx5dr_action_crypto_attr *attr,
+ uint32_t flags)
+{
+ enum mlx5dr_action_type action_type;
+ struct mlx5dr_action *action;
+
+ if (mlx5dr_action_is_root_flags(flags)) {
+ DR_LOG(ERR, "Action flags must be only non root (HWS)");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (attr->crypto_type != MLX5DR_ACTION_CRYPTO_TYPE_IPSEC) {
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (attr->op == MLX5DR_ACTION_CRYPTO_OP_ENCRYPT) {
+ if (flags & MLX5DR_ACTION_FLAG_HWS_RX) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ action_type = MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT;
+ } else if (attr->op == MLX5DR_ACTION_CRYPTO_OP_DECRYPT) {
+ if (flags & MLX5DR_ACTION_FLAG_HWS_TX) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ action_type = MLX5DR_ACTION_TYP_CRYPTO_DECRYPT;
+ } else {
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ action = mlx5dr_action_create_generic(ctx, flags, action_type);
+ if (!action)
+ return NULL;
+
+ if (mlx5dr_action_create_stcs(action, devx_obj)) {
+ simple_free(action);
+ return NULL;
+ }
+
+ return action;
+}
+
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
{
struct mlx5dr_devx_obj *obj = NULL;
@@ -1963,6 +2080,8 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
case MLX5DR_ACTION_TYP_ASO_METER:
case MLX5DR_ACTION_TYP_ASO_CT:
case MLX5DR_ACTION_TYP_PUSH_VLAN:
+ case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
+ case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
mlx5dr_action_destroy_stcs(action);
break;
case MLX5DR_ACTION_TYP_DEST_ROOT:
@@ -2460,6 +2579,33 @@ mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply,
MLX5DR_CONTEXT_SHARED_STC_DECAP));
}
+static void
+mlx5dr_action_setter_crypto_encryption(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ struct mlx5dr_rule_action *rule_action;
+
+ rule_action = &apply->rule_action[setter->idx_single];
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->crypto.offset);
+ mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_single);
+}
+
+static void
+mlx5dr_action_setter_crypto_decryption(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ struct mlx5dr_rule_action *rule_action;
+
+ rule_action = &apply->rule_action[setter->idx_triple];
+
+ mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_triple);
+ apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0;
+ apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = htobe32(rule_action->crypto.offset);
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
{
struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1;
@@ -2594,6 +2740,22 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
setter->idx_ctr = i;
break;
+ case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
+ /* Single encryption action, consume triple due to HW limitations */
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_TRIPLE);
+ setter->flags |= ASF_TRIPLE;
+ setter->set_single = &mlx5dr_action_setter_crypto_encryption;
+ setter->idx_single = i;
+ break;
+
+ case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
+ /* Triple decryption action */
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_TRIPLE);
+ setter->flags |= ASF_TRIPLE;
+ setter->set_triple = &mlx5dr_action_setter_crypto_decryption;
+ setter->idx_triple = i;
+ break;
+
default:
DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 582a38bebc..6bfa0bcc4a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -21,6 +21,8 @@ enum mlx5dr_action_stc_idx {
MLX5DR_ACTION_STC_IDX_LAST_COMBO1 = 3,
/* STC combo2: CTR, 3 x SINGLE, Hit */
MLX5DR_ACTION_STC_IDX_LAST_COMBO2 = 4,
+ /* STC combo2: CTR, TRIPLE, Hit */
+ MLX5DR_ACTION_STC_IDX_LAST_COMBO3 = 2,
};
enum mlx5dr_action_offset {
@@ -52,6 +54,7 @@ enum mlx5dr_action_setter_flag {
ASF_SINGLE2 = 1 << 1,
ASF_SINGLE3 = 1 << 2,
ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3,
+ ASF_TRIPLE = ASF_SINGLE1 | ASF_DOUBLE,
ASF_REPARSE = 1 << 3,
ASF_REMOVE = 1 << 4,
ASF_MODIFY = 1 << 5,
@@ -94,10 +97,12 @@ typedef void (*mlx5dr_action_setter_fp)
struct mlx5dr_actions_wqe_setter {
mlx5dr_action_setter_fp set_single;
mlx5dr_action_setter_fp set_double;
+ mlx5dr_action_setter_fp set_triple;
mlx5dr_action_setter_fp set_hit;
mlx5dr_action_setter_fp set_ctr;
uint8_t idx_single;
uint8_t idx_double;
+ uint8_t idx_triple;
uint8_t idx_ctr;
uint8_t idx_hit;
uint8_t flags;
@@ -183,6 +188,9 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at);
bool mlx5dr_action_check_combo(enum mlx5dr_action_type *user_actions,
enum mlx5dr_table_type table_type);
+bool mlx5dr_action_check_restrictions(struct mlx5dr_matcher *matcher,
+ enum mlx5dr_action_type *actions);
+
int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx,
struct mlx5dr_cmd_stc_modify_attr *stc_attr,
uint32_t table_type,
@@ -230,26 +238,32 @@ mlx5dr_action_apply_setter(struct mlx5dr_actions_apply_data *apply,
uint8_t num_of_actions;
/* Set control counter */
- if (setter->flags & ASF_CTR)
+ if (setter->set_ctr)
setter->set_ctr(apply, setter);
else
mlx5dr_action_setter_default_ctr(apply, setter);
- /* Set single and double on match */
if (!is_jumbo) {
- if (setter->flags & ASF_SINGLE1)
- setter->set_single(apply, setter);
- else
- mlx5dr_action_setter_default_single(apply, setter);
-
- if (setter->flags & ASF_DOUBLE)
- setter->set_double(apply, setter);
- else
- mlx5dr_action_setter_default_double(apply, setter);
-
- num_of_actions = setter->flags & ASF_DOUBLE ?
- MLX5DR_ACTION_STC_IDX_LAST_COMBO1 :
- MLX5DR_ACTION_STC_IDX_LAST_COMBO2;
+ if (unlikely(setter->set_triple)) {
+ /* Set triple on match */
+ setter->set_triple(apply, setter);
+ num_of_actions = MLX5DR_ACTION_STC_IDX_LAST_COMBO3;
+ } else {
+ /* Set single and double on match */
+ if (setter->set_single)
+ setter->set_single(apply, setter);
+ else
+ mlx5dr_action_setter_default_single(apply, setter);
+
+ if (setter->set_double)
+ setter->set_double(apply, setter);
+ else
+ mlx5dr_action_setter_default_double(apply, setter);
+
+ num_of_actions = setter->set_double ?
+ MLX5DR_ACTION_STC_IDX_LAST_COMBO1 :
+ MLX5DR_ACTION_STC_IDX_LAST_COMBO2;
+ }
} else {
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0;
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index c52cdd0767..3b3690699d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -541,6 +541,14 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
MLX5_SET(stc_ste_param_remove_words, stc_parm,
remove_size, stc_attr->remove_words.num_of_words);
break;
+ case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION:
+ MLX5_SET(stc_ste_param_ipsec_encrypt, stc_parm, ipsec_object_id,
+ stc_attr->id);
+ break;
+ case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION:
+ MLX5_SET(stc_ste_param_ipsec_decrypt, stc_parm, ipsec_object_id,
+ stc_attr->id);
+ break;
default:
DR_LOG(ERR, "Not supported type %d", stc_attr->action_type);
rte_errno = EINVAL;
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index 03db62e2e2..7bbb684dbd 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -100,7 +100,7 @@ struct mlx5dr_cmd_stc_modify_attr {
uint8_t action_offset;
enum mlx5_ifc_stc_action_type action_type;
union {
- uint32_t id; /* TIRN, TAG, FT ID, STE ID */
+ uint32_t id; /* TIRN, TAG, FT ID, STE ID, CRYPTO */
struct {
uint8_t decap;
uint16_t start_anchor;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index e7b1f2cc32..8cf3909606 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -24,6 +24,8 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT",
[MLX5DR_ACTION_TYP_DEST_ROOT] = "DEST_ROOT",
[MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY",
+ [MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT] = "CRYPTO_ENCRYPT",
+ [MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT",
};
static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index a82c182460..6f74cf3677 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -714,6 +714,11 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
return rte_errno;
}
+ if (mlx5dr_action_check_restrictions(matcher, at->action_type_arr)) {
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
+
/* Process action template to setters */
ret = mlx5dr_action_template_process(at);
if (ret) {
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 17/30] net/mlx5/hws: support ASO IPsec action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (14 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 16/30] net/mlx5/hws: support IPsec encryption/decryption action Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 18/30] net/mlx5/hws: support reformat trailer action Gregory Etelson
` (13 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support the ASO IPsec action. This action allows performing some of the
IPsec full-offload operations, for example replay protection and
sequence number incrementing.
In the Tx flow this action is used before encrypting the packet, to
increment the sequence number.
In the Rx flow this action is used after decrypting the packet, to
check it for validity against the replay protection window.
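A minimal sketch of how the new action could be used (not part of this
patch); ipsec_aso_obj and return_reg_id are assumed to come from earlier
ASO object and register allocation:

#include <stdint.h>
#include "mlx5dr.h"

/* Create an ASO IPsec action for the Rx (HWS) pipeline. */
static struct mlx5dr_action *
create_aso_ipsec_rx(struct mlx5dr_context *ctx,
		    struct mlx5dr_devx_obj *ipsec_aso_obj,
		    uint8_t return_reg_id)
{
	return mlx5dr_action_create_aso_ipsec(ctx, ipsec_aso_obj,
					      return_reg_id,
					      MLX5DR_ACTION_FLAG_HWS_RX);
}

/* Per rule, aso_ipsec.offset selects the ASO object entry
 * (one entry per object, see MLX5_ASO_IPSEC_NUM_PER_OBJ).
 */
static void
fill_aso_ipsec_rule_action(struct mlx5dr_rule_action *ra,
			   struct mlx5dr_action *aso, uint32_t aso_offset)
{
	ra->action = aso;
	ra->aso_ipsec.offset = aso_offset;
}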
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 1 +
drivers/net/mlx5/hws/mlx5dr.h | 23 ++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 32 +++++++++++++++++++++++++---
drivers/net/mlx5/hws/mlx5dr_debug.c | 1 +
4 files changed, 54 insertions(+), 3 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 0eecf0691b..31ebec7bcf 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3539,6 +3539,7 @@ struct mlx5_ifc_stc_ste_param_flow_counter_bits {
enum {
MLX5_ASO_CT_NUM_PER_OBJ = 1,
MLX5_ASO_METER_NUM_PER_OBJ = 2,
+ MLX5_ASO_IPSEC_NUM_PER_OBJ = 1,
};
struct mlx5_ifc_stc_ste_param_execute_aso_bits {
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 74d05229c7..bd352fa26d 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -45,6 +45,7 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_PUSH_VLAN,
MLX5DR_ACTION_TYP_ASO_METER,
MLX5DR_ACTION_TYP_ASO_CT,
+ MLX5DR_ACTION_TYP_ASO_IPSEC,
MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT,
MLX5DR_ACTION_TYP_CRYPTO_DECRYPT,
MLX5DR_ACTION_TYP_DEST_ROOT,
@@ -235,6 +236,10 @@ struct mlx5dr_rule_action {
enum mlx5dr_action_aso_ct_flags direction;
} aso_ct;
+ struct {
+ uint32_t offset;
+ } aso_ipsec;
+
struct {
uint32_t offset;
} crypto;
@@ -659,6 +664,24 @@ mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx,
uint8_t return_reg_id,
uint32_t flags);
+/* Create direct rule ASO IPSEC action.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] devx_obj
+ * The DEVX ASO object.
+ * @param[in] return_reg_id
+ * Copy the ASO object value into this reg_id, after a packet hits a rule with this ASO object.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_aso_ipsec(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ uint8_t return_reg_id,
+ uint32_t flags);
+
/* Create direct rule pop vlan action.
* @param[in] ctx
* The context in which the new action will be created.
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 4910b4f730..956909a628 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -9,11 +9,11 @@
#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1
/* This is the maximum allowed action order for each table type:
- * TX: POP_VLAN, CTR, ASO_METER, AS_CT, PUSH_VLAN, MODIFY, ENCAP, ENCRYPT,
+ * TX: POP_VLAN, CTR, ASO, PUSH_VLAN, MODIFY, ENCAP, ENCRYPT,
* Term
- * RX: TAG, DECAP, POP_VLAN, CTR, DECRYPT, ASO_METER, ASO_CT, PUSH_VLAN,
+ * RX: TAG, DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN,
* MODIFY, ENCAP, Term
- * FDB: DECAP, POP_VLAN, CTR, DECRYPT, ASO_METER, ASO_CT, PUSH_VLAN, MODIFY,
+ * FDB: DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN, MODIFY,
* ENCAP, ENCRYPT, Term
*/
static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = {
@@ -27,6 +27,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT),
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
+ BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -46,6 +47,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_CTR),
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
+ BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -67,6 +69,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT),
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
+ BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -642,6 +645,13 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->aso.devx_obj_id = obj->id;
attr->aso.return_reg_id = action->aso.return_reg_id;
break;
+ case MLX5DR_ACTION_TYP_ASO_IPSEC:
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO;
+ attr->aso.aso_type = ASO_OPC_MOD_IPSEC;
+ attr->aso.devx_obj_id = obj->id;
+ attr->aso.return_reg_id = action->aso.return_reg_id;
+ break;
case MLX5DR_ACTION_TYP_VPORT:
attr->action_offset = MLX5DR_ACTION_OFFSET_HIT;
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT;
@@ -1076,6 +1086,16 @@ mlx5dr_action_create_aso_ct(struct mlx5dr_context *ctx,
devx_obj, return_reg_id, flags);
}
+struct mlx5dr_action *
+mlx5dr_action_create_aso_ipsec(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ uint8_t return_reg_id,
+ uint32_t flags)
+{
+ return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_IPSEC,
+ devx_obj, return_reg_id, flags);
+}
+
struct mlx5dr_action *
mlx5dr_action_create_counter(struct mlx5dr_context *ctx,
struct mlx5dr_devx_obj *obj,
@@ -2079,6 +2099,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
case MLX5DR_ACTION_TYP_ASO_METER:
case MLX5DR_ACTION_TYP_ASO_CT:
+ case MLX5DR_ACTION_TYP_ASO_IPSEC:
case MLX5DR_ACTION_TYP_PUSH_VLAN:
case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
@@ -2490,6 +2511,10 @@ mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply,
offset = rule_action->aso_ct.offset / MLX5_ASO_CT_NUM_PER_OBJ;
exe_aso_ctrl = rule_action->aso_ct.direction;
break;
+ case MLX5DR_ACTION_TYP_ASO_IPSEC:
+ offset = rule_action->aso_ipsec.offset / MLX5_ASO_IPSEC_NUM_PER_OBJ;
+ exe_aso_ctrl = 0;
+ break;
default:
DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type);
rte_errno = ENOTSUP;
@@ -2679,6 +2704,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_ASO_METER:
case MLX5DR_ACTION_TYP_ASO_CT:
+ case MLX5DR_ACTION_TYP_ASO_IPSEC:
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE);
setter->flags |= ASF_DOUBLE;
setter->set_double = &mlx5dr_action_setter_aso;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 8cf3909606..74893f61fb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -22,6 +22,7 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_PUSH_VLAN] = "PUSH_VLAN",
[MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER",
[MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT",
+ [MLX5DR_ACTION_TYP_ASO_IPSEC] = "ASO_IPSEC",
[MLX5DR_ACTION_TYP_DEST_ROOT] = "DEST_ROOT",
[MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY",
[MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT] = "CRYPTO_ENCRYPT",
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 18/30] net/mlx5/hws: support reformat trailer action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (15 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 17/30] net/mlx5/hws: support ASO IPsec action Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 19/30] net/mlx5/hws: support ASO first hit action Gregory Etelson
` (12 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support the reformat trailer action. This action inserts or removes a
specific crypto security protocol trailer on the packet.
For now only the IPsec protocol trailer is supported.
The trailer should be added before encrypting the packet in the Tx flow,
and can be removed after decryption in the Rx flow.
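A minimal creation sketch (not part of this patch); the 8-byte trailer
size is an illustrative value and must be DW (4-byte) aligned:

#include "mlx5dr.h"

/* Create an IPsec trailer-insert action for the Tx (HWS) pipeline. */
static struct mlx5dr_action *
create_ipsec_trailer_insert(struct mlx5dr_context *ctx)
{
	struct mlx5dr_action_trailer_attr attr = {
		.type = MLX5DR_ACTION_TRAILER_TYPE_IPSEC,
		.op = MLX5DR_ACTION_TRAILER_OP_INSERT,
		.size = 8, /* example size, multiple of DW_SIZE */
	};

	return mlx5dr_action_create_reformat_trailer(ctx, &attr,
						     MLX5DR_ACTION_FLAG_HWS_TX);
}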
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 11 +++
drivers/net/mlx5/hws/mlx5dr.h | 32 ++++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 114 ++++++++++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_action.h | 5 ++
drivers/net/mlx5/hws/mlx5dr_cmd.c | 8 ++
drivers/net/mlx5/hws/mlx5dr_cmd.h | 5 ++
drivers/net/mlx5/hws/mlx5dr_debug.c | 1 +
7 files changed, 172 insertions(+), 4 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 31ebec7bcf..793fc1a674 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3501,6 +3501,7 @@ enum mlx5_ifc_stc_action_type {
MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION = 0x10,
MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION = 0x11,
MLX5_IFC_STC_ACTION_TYPE_ASO = 0x12,
+ MLX5_IFC_STC_ACTION_TYPE_TRAILER = 0x13,
MLX5_IFC_STC_ACTION_TYPE_COUNTER = 0x14,
MLX5_IFC_STC_ACTION_TYPE_ADD_FIELD = 0x1b,
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE = 0x80,
@@ -3557,6 +3558,15 @@ struct mlx5_ifc_stc_ste_param_ipsec_decrypt_bits {
u8 ipsec_object_id[0x20];
};
+struct mlx5_ifc_stc_ste_param_trailer_bits {
+ u8 reserved_at_0[0x8];
+ u8 command[0x4];
+ u8 reserved_at_c[0x2];
+ u8 type[0x2];
+ u8 reserved_at_10[0xa];
+ u8 length[0x6];
+};
+
struct mlx5_ifc_stc_ste_param_header_modify_list_bits {
u8 header_modify_pattern_id[0x20];
u8 header_modify_argument_id[0x20];
@@ -3625,6 +3635,7 @@ union mlx5_ifc_stc_param_bits {
struct mlx5_ifc_stc_ste_param_vport_bits vport;
struct mlx5_ifc_stc_ste_param_ipsec_encrypt_bits ipsec_encrypt;
struct mlx5_ifc_stc_ste_param_ipsec_decrypt_bits ipsec_decrypt;
+ struct mlx5_ifc_stc_ste_param_trailer_bits trailer;
u8 reserved_at_0[0x80];
};
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index bd352fa26d..e425a8803a 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -33,6 +33,7 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2,
MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2,
MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3,
+ MLX5DR_ACTION_TYP_REFORMAT_TRAILER,
MLX5DR_ACTION_TYP_DROP,
MLX5DR_ACTION_TYP_TIR,
MLX5DR_ACTION_TYP_TBL,
@@ -195,6 +196,21 @@ struct mlx5dr_action_crypto_attr {
enum mlx5dr_action_crypto_op op;
};
+enum mlx5dr_action_trailer_type {
+ MLX5DR_ACTION_TRAILER_TYPE_IPSEC,
+};
+
+enum mlx5dr_action_trailer_op {
+ MLX5DR_ACTION_TRAILER_OP_INSERT,
+ MLX5DR_ACTION_TRAILER_OP_REMOVE,
+};
+
+struct mlx5dr_action_trailer_attr {
+ enum mlx5dr_action_trailer_type type;
+ enum mlx5dr_action_trailer_op op;
+ uint8_t size;
+};
+
/* In actions that take offset, the offset is unique, pointing to a single
* resource and the user should not reuse the same index because data changing
* is not atomic.
@@ -607,6 +623,22 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
uint32_t log_bulk_size,
uint32_t flags);
+/* Create reformat trailer action.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] attr
+ * attributes: specifies if to insert/remove trailer,
+ * also specifies the trailer type and size in bytes.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx,
+ struct mlx5dr_action_trailer_attr *attr,
+ uint32_t flags);
+
/* Create direct rule modify header action.
*
* @param[in] ctx
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 956909a628..f8de3d8d98 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -9,16 +9,17 @@
#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1
/* This is the maximum allowed action order for each table type:
- * TX: POP_VLAN, CTR, ASO, PUSH_VLAN, MODIFY, ENCAP, ENCRYPT,
+ * TX: POP_VLAN, CTR, ASO, PUSH_VLAN, MODIFY, ENCAP, TRAILER, ENCRYPT,
* Term
- * RX: TAG, DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN,
+ * RX: TAG, TRAILER, DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN,
* MODIFY, ENCAP, Term
- * FDB: DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN, MODIFY,
+ * FDB: TRAILER, DECAP, POP_VLAN, CTR, DECRYPT, ASO, PUSH_VLAN, MODIFY,
* ENCAP, ENCRYPT, Term
*/
static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_MAX] = {
[MLX5DR_TABLE_TYPE_NIC_RX] = {
BIT(MLX5DR_ACTION_TYP_TAG),
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -53,6 +54,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT),
BIT(MLX5DR_ACTION_TYP_TBL) |
BIT(MLX5DR_ACTION_TYP_MISS) |
@@ -61,6 +63,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_LAST),
},
[MLX5DR_TABLE_TYPE_FDB] = {
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -75,6 +78,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT),
BIT(MLX5DR_ACTION_TYP_TBL) |
BIT(MLX5DR_ACTION_TYP_MISS) |
@@ -296,7 +300,8 @@ bool mlx5dr_action_check_restrictions(struct mlx5dr_matcher *matcher,
break;
default:
restricted_bits = BIT(MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT) |
- BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT);
+ BIT(MLX5DR_ACTION_TYP_CRYPTO_DECRYPT) |
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER);
}
while (actions[idx] != MLX5DR_ACTION_TYP_LAST) {
@@ -377,6 +382,7 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
struct mlx5dr_devx_obj *devx_obj;
bool use_fixup = false;
uint32_t fw_tbl_type;
+ uint32_t val;
fw_tbl_type = mlx5dr_table_get_res_fw_ft_type(table_type, is_mirror);
@@ -444,6 +450,20 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
}
break;
+ case MLX5_IFC_STC_ACTION_TYPE_TRAILER:
+ if (table_type != MLX5DR_TABLE_TYPE_FDB)
+ break;
+
+ val = stc_attr->reformat_trailer.op;
+ if ((val == MLX5DR_ACTION_TRAILER_OP_INSERT && !is_mirror) ||
+ (val == MLX5DR_ACTION_TRAILER_OP_REMOVE && is_mirror)) {
+ fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
+ fixup_stc_attr->action_offset = stc_attr->action_offset;
+ fixup_stc_attr->stc_offset = stc_attr->stc_offset;
+ use_fixup = true;
+ }
+ break;
+
default:
break;
}
@@ -683,6 +703,13 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
attr->id = obj->id;
break;
+ case MLX5DR_ACTION_TYP_REFORMAT_TRAILER:
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TRAILER;
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->reformat_trailer.type = action->reformat_trailer.type;
+ attr->reformat_trailer.op = action->reformat_trailer.op;
+ attr->reformat_trailer.size = action->reformat_trailer.size;
+ break;
default:
DR_LOG(ERR, "Invalid action type %d", action->type);
assert(false);
@@ -2080,6 +2107,64 @@ mlx5dr_action_create_crypto(struct mlx5dr_context *ctx,
return action;
}
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx,
+ struct mlx5dr_action_trailer_attr *attr,
+ uint32_t flags)
+{
+ struct mlx5dr_action *action;
+
+ if (mlx5dr_action_is_root_flags(flags)) {
+ DR_LOG(ERR, "Action flags must be only non root (HWS)");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (attr->type != MLX5DR_ACTION_TRAILER_TYPE_IPSEC) {
+ DR_LOG(ERR, "Only trailer of IPsec is supported");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (attr->op == MLX5DR_ACTION_TRAILER_OP_INSERT) {
+ if (flags & MLX5DR_ACTION_FLAG_HWS_RX) {
+ DR_LOG(ERR, "Trailer insertion is not supported in Rx");
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ } else if (attr->op == MLX5DR_ACTION_TRAILER_OP_REMOVE) {
+ if (flags & MLX5DR_ACTION_FLAG_HWS_TX) {
+ DR_LOG(ERR, "Trailer removal is not supported in Tx");
+ rte_errno = EINVAL;
+ return NULL;
+ }
+ } else {
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (attr->size % DW_SIZE) {
+ DR_LOG(ERR, "Wrong trailer size, size should divide by %u", DW_SIZE);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_REFORMAT_TRAILER);
+ if (!action)
+ return NULL;
+
+ action->reformat_trailer.type = attr->type;
+ action->reformat_trailer.op = attr->op;
+ action->reformat_trailer.size = attr->size;
+
+ if (mlx5dr_action_create_stcs(action, NULL)) {
+ simple_free(action);
+ return NULL;
+ }
+
+ return action;
+}
+
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
{
struct mlx5dr_devx_obj *obj = NULL;
@@ -2103,6 +2188,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
case MLX5DR_ACTION_TYP_PUSH_VLAN:
case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
+ case MLX5DR_ACTION_TYP_REFORMAT_TRAILER:
mlx5dr_action_destroy_stcs(action);
break;
case MLX5DR_ACTION_TYP_DEST_ROOT:
@@ -2631,6 +2717,18 @@ mlx5dr_action_setter_crypto_decryption(struct mlx5dr_actions_apply_data *apply,
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
}
+static void
+mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ mlx5dr_action_apply_stc(apply, MLX5DR_ACTION_STC_IDX_DW5, setter->idx_triple);
+ apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW6] = 0;
+ apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW7] = 0;
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0;
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW6] = 0;
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
+}
+
int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
{
struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1;
@@ -2782,6 +2880,14 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
setter->idx_triple = i;
break;
+ case MLX5DR_ACTION_TYP_REFORMAT_TRAILER:
+ /* Single push trailer, consume triple due to HW limitations */
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_TRIPLE);
+ setter->flags |= ASF_TRIPLE;
+ setter->set_triple = &mlx5dr_action_setter_reformat_trailer;
+ setter->idx_triple = i;
+ break;
+
default:
DR_LOG(ERR, "Unsupported action type: %d", action_type[i]);
rte_errno = ENOTSUP;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 6bfa0bcc4a..b64d6cc9a8 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -153,6 +153,11 @@ struct mlx5dr_action {
struct {
struct mlx5dv_steering_anchor *sa;
} root_tbl;
+ struct {
+ uint8_t type;
+ uint8_t op;
+ uint8_t size;
+ } reformat_trailer;
struct {
struct mlx5dr_devx_obj *devx_obj;
} devx_dest;
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 3b3690699d..02547e7178 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -549,6 +549,14 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
MLX5_SET(stc_ste_param_ipsec_decrypt, stc_parm, ipsec_object_id,
stc_attr->id);
break;
+ case MLX5_IFC_STC_ACTION_TYPE_TRAILER:
+ MLX5_SET(stc_ste_param_trailer, stc_parm, command,
+ stc_attr->reformat_trailer.op);
+ MLX5_SET(stc_ste_param_trailer, stc_parm, type,
+ stc_attr->reformat_trailer.type);
+ MLX5_SET(stc_ste_param_trailer, stc_parm, length,
+ stc_attr->reformat_trailer.size);
+ break;
default:
DR_LOG(ERR, "Not supported type %d", stc_attr->action_type);
rte_errno = EINVAL;
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index 7bbb684dbd..c082157538 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -141,6 +141,11 @@ struct mlx5dr_cmd_stc_modify_attr {
uint16_t start_anchor;
uint16_t num_of_words;
} remove_words;
+ struct {
+ uint8_t type;
+ uint8_t op;
+ uint8_t size;
+ } reformat_trailer;
uint32_t dest_table_id;
uint32_t dest_tir_num;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 74893f61fb..976a1993e3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -10,6 +10,7 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2] = "L2_TO_TNL_L2",
[MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2] = "TNL_L3_TO_L2",
[MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3] = "L2_TO_TNL_L3",
+ [MLX5DR_ACTION_TYP_REFORMAT_TRAILER] = "REFORMAT_TRAILER",
[MLX5DR_ACTION_TYP_DROP] = "DROP",
[MLX5DR_ACTION_TYP_TIR] = "TIR",
[MLX5DR_ACTION_TYP_TBL] = "TBL",
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 19/30] net/mlx5/hws: support ASO first hit action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (16 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 18/30] net/mlx5/hws: support reformat trailer action Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 20/30] net/mlx5/hws: support insert header action Gregory Etelson
` (11 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support the ASO first hit action.
This action allows tracking whether a rule has been hit by a packet.
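A minimal sketch of creating and attaching the action (not part of this
patch); flow_hit_aso_obj is assumed to be a FLOW_HIT ASO DEVX object
allocated elsewhere, and return_reg_id a free metadata register:

#include <stdbool.h>
#include <stdint.h>
#include "mlx5dr.h"

static struct mlx5dr_action *
create_aso_first_hit(struct mlx5dr_context *ctx,
		     struct mlx5dr_devx_obj *flow_hit_aso_obj,
		     uint8_t return_reg_id)
{
	return mlx5dr_action_create_aso_first_hit(ctx, flow_hit_aso_obj,
						  return_reg_id,
						  MLX5DR_ACTION_FLAG_HWS_RX);
}

/* Each ASO object holds MLX5_ASO_FIRST_HIT_NUM_PER_OBJ (512) flags;
 * aso_first_hit.offset selects the flag and aso_first_hit.set arms it.
 */
static void
fill_first_hit_rule_action(struct mlx5dr_rule_action *ra,
			   struct mlx5dr_action *aso, uint32_t flag_offset)
{
	ra->action = aso;
	ra->aso_first_hit.offset = flag_offset;
	ra->aso_first_hit.set = true;
}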
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 5 +++++
drivers/net/mlx5/hws/mlx5dr.h | 25 +++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 33 ++++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_debug.c | 1 +
4 files changed, 64 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 793fc1a674..40e461cb82 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3541,6 +3541,7 @@ enum {
MLX5_ASO_CT_NUM_PER_OBJ = 1,
MLX5_ASO_METER_NUM_PER_OBJ = 2,
MLX5_ASO_IPSEC_NUM_PER_OBJ = 1,
+ MLX5_ASO_FIRST_HIT_NUM_PER_OBJ = 512,
};
struct mlx5_ifc_stc_ste_param_execute_aso_bits {
@@ -5371,6 +5372,10 @@ enum {
MLX5_FLOW_COLOR_UNDEFINED,
};
+enum {
+ MLX5_ASO_FIRST_HIT_SET = 1,
+};
+
/* Maximum value of srTCM & trTCM metering parameters. */
#define MLX5_SRTCM_XBS_MAX (0xFF * (1ULL << 0x1F))
#define MLX5_SRTCM_XIR_MAX (8 * (1ULL << 30) * 0xFF)
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index e425a8803a..e7d89ad7ec 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -47,6 +47,7 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_ASO_METER,
MLX5DR_ACTION_TYP_ASO_CT,
MLX5DR_ACTION_TYP_ASO_IPSEC,
+ MLX5DR_ACTION_TYP_ASO_FIRST_HIT,
MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT,
MLX5DR_ACTION_TYP_CRYPTO_DECRYPT,
MLX5DR_ACTION_TYP_DEST_ROOT,
@@ -256,6 +257,11 @@ struct mlx5dr_rule_action {
uint32_t offset;
} aso_ipsec;
+ struct {
+ uint32_t offset;
+ bool set;
+ } aso_first_hit;
+
struct {
uint32_t offset;
} crypto;
@@ -714,6 +720,25 @@ mlx5dr_action_create_aso_ipsec(struct mlx5dr_context *ctx,
uint8_t return_reg_id,
uint32_t flags);
+/* Create direct rule ASO FIRST HIT action.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] devx_obj
+ * The DEVX ASO object.
+ * @param[in] return_reg_id
+ * When a packet hits a flow connected to this object, a flag is set indicating this event,
+ * copy the original value of this flag into this reg_id.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_aso_first_hit(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ uint8_t return_reg_id,
+ uint32_t flags);
+
/* Create direct rule pop vlan action.
* @param[in] ctx
* The context in which the new action will be created.
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index f8de3d8d98..fe9c39b207 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -7,6 +7,7 @@
#define WIRE_PORT 0xFFFF
#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1
+#define MLX5DR_ACTION_ASO_FIRST_HIT_SET_OFFSET 9
/* This is the maximum allowed action order for each table type:
* TX: POP_VLAN, CTR, ASO, PUSH_VLAN, MODIFY, ENCAP, TRAILER, ENCRYPT,
@@ -29,6 +30,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
+ BIT(MLX5DR_ACTION_TYP_ASO_FIRST_HIT),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -49,6 +51,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
+ BIT(MLX5DR_ACTION_TYP_ASO_FIRST_HIT),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -73,6 +76,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_ASO_METER),
BIT(MLX5DR_ACTION_TYP_ASO_CT),
BIT(MLX5DR_ACTION_TYP_ASO_IPSEC),
+ BIT(MLX5DR_ACTION_TYP_ASO_FIRST_HIT),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
@@ -672,6 +676,13 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->aso.devx_obj_id = obj->id;
attr->aso.return_reg_id = action->aso.return_reg_id;
break;
+ case MLX5DR_ACTION_TYP_ASO_FIRST_HIT:
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_ASO;
+ attr->aso.aso_type = ASO_OPC_MOD_FLOW_HIT;
+ attr->aso.devx_obj_id = obj->id;
+ attr->aso.return_reg_id = action->aso.return_reg_id;
+ break;
case MLX5DR_ACTION_TYP_VPORT:
attr->action_offset = MLX5DR_ACTION_OFFSET_HIT;
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT;
@@ -1123,6 +1134,16 @@ mlx5dr_action_create_aso_ipsec(struct mlx5dr_context *ctx,
devx_obj, return_reg_id, flags);
}
+struct mlx5dr_action *
+mlx5dr_action_create_aso_first_hit(struct mlx5dr_context *ctx,
+ struct mlx5dr_devx_obj *devx_obj,
+ uint8_t return_reg_id,
+ uint32_t flags)
+{
+ return mlx5dr_action_create_aso(ctx, MLX5DR_ACTION_TYP_ASO_FIRST_HIT,
+ devx_obj, return_reg_id, flags);
+}
+
struct mlx5dr_action *
mlx5dr_action_create_counter(struct mlx5dr_context *ctx,
struct mlx5dr_devx_obj *obj,
@@ -2185,6 +2206,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
case MLX5DR_ACTION_TYP_ASO_METER:
case MLX5DR_ACTION_TYP_ASO_CT:
case MLX5DR_ACTION_TYP_ASO_IPSEC:
+ case MLX5DR_ACTION_TYP_ASO_FIRST_HIT:
case MLX5DR_ACTION_TYP_PUSH_VLAN:
case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
@@ -2601,6 +2623,15 @@ mlx5dr_action_setter_aso(struct mlx5dr_actions_apply_data *apply,
offset = rule_action->aso_ipsec.offset / MLX5_ASO_IPSEC_NUM_PER_OBJ;
exe_aso_ctrl = 0;
break;
+ case MLX5DR_ACTION_TYP_ASO_FIRST_HIT:
+ /* exe_aso_ctrl FIRST HIT format:
+ * [STC only and reserved bits 22b][set 1b][offset 9b]
+ */
+ offset = rule_action->aso_first_hit.offset / MLX5_ASO_FIRST_HIT_NUM_PER_OBJ;
+ exe_aso_ctrl = rule_action->aso_first_hit.offset % MLX5_ASO_FIRST_HIT_NUM_PER_OBJ;
+ exe_aso_ctrl |= rule_action->aso_first_hit.set <<
+ MLX5DR_ACTION_ASO_FIRST_HIT_SET_OFFSET;
+ break;
default:
DR_LOG(ERR, "Unsupported ASO action type: %d", rule_action->action->type);
rte_errno = ENOTSUP;
@@ -2803,6 +2834,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_ASO_METER:
case MLX5DR_ACTION_TYP_ASO_CT:
case MLX5DR_ACTION_TYP_ASO_IPSEC:
+ case MLX5DR_ACTION_TYP_ASO_FIRST_HIT:
+ /* Double ASO action */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE);
setter->flags |= ASF_DOUBLE;
setter->set_double = &mlx5dr_action_setter_aso;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 976a1993e3..552dba5e63 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -24,6 +24,7 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_ASO_METER] = "ASO_METER",
[MLX5DR_ACTION_TYP_ASO_CT] = "ASO_CT",
[MLX5DR_ACTION_TYP_ASO_IPSEC] = "ASO_IPSEC",
+ [MLX5DR_ACTION_TYP_ASO_FIRST_HIT] = "ASO_FIRST_HIT",
[MLX5DR_ACTION_TYP_DEST_ROOT] = "DEST_ROOT",
[MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY",
[MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT] = "CRYPTO_ENCRYPT",
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 20/30] net/mlx5/hws: support insert header action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (17 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 19/30] net/mlx5/hws: support ASO first hit action Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 21/30] net/mlx5/hws: support remove " Gregory Etelson
` (10 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support the insert header action. This allows encapsulation at a
specific anchor and offset selected by the user.
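A minimal creation sketch (not part of this patch); the anchor, offset and
header bytes are illustrative values, and both the offset and the header
size must be WORD (2-byte) aligned:

#include <stdint.h>
#include "mlx5dr.h"

/* Create a shared insert-header action that inserts an 8-byte header
 * 14 bytes after the packet start (after the outer MACs).
 */
static struct mlx5dr_action *
create_insert_header_tx(struct mlx5dr_context *ctx)
{
	static uint8_t hdr_data[8] = { 0 }; /* hypothetical header bytes */
	struct mlx5dr_action_insert_header hdr = {
		.hdr = {
			.sz = sizeof(hdr_data),
			.data = hdr_data,
		},
		.anchor = MLX5_HEADER_ANCHOR_PACKET_START,
		.offset = 14,
		.encap = false,
	};

	/* Shared action: single header, no bulk, data fixed at creation. */
	return mlx5dr_action_create_insert_header(ctx, 1, &hdr, 0,
						  MLX5DR_ACTION_FLAG_HWS_TX |
						  MLX5DR_ACTION_FLAG_SHARED);
}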
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr.h | 36 ++++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 112 +++++++++++++++++++++----
drivers/net/mlx5/hws/mlx5dr_action.h | 5 +-
drivers/net/mlx5/hws/mlx5dr_cmd.c | 4 +-
drivers/net/mlx5/hws/mlx5dr_debug.c | 1 +
drivers/net/mlx5/hws/mlx5dr_internal.h | 1 +
6 files changed, 141 insertions(+), 18 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index e7d89ad7ec..a6bbb85eed 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -50,6 +50,7 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_ASO_FIRST_HIT,
MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT,
MLX5DR_ACTION_TYP_CRYPTO_DECRYPT,
+ MLX5DR_ACTION_TYP_INSERT_HEADER,
MLX5DR_ACTION_TYP_DEST_ROOT,
MLX5DR_ACTION_TYP_DEST_ARRAY,
MLX5DR_ACTION_TYP_MAX,
@@ -174,6 +175,20 @@ struct mlx5dr_action_reformat_header {
void *data;
};
+struct mlx5dr_action_insert_header {
+ struct mlx5dr_action_reformat_header hdr;
+ /* PRM start anchor to which header will be inserted */
+ uint8_t anchor;
+ /* Header insertion offset in bytes, from the start
+ * anchor to the location where new header will be inserted.
+ */
+ uint8_t offset;
+ /* Indicates this header insertion adds encapsulation header to the packet,
+ * requiring device to update offloaded fields (for example IPv4 total length).
+ */
+ bool encap;
+};
+
struct mlx5dr_action_mh_pattern {
/* Byte size of modify actions provided by "data" */
size_t sz;
@@ -813,6 +828,27 @@ mlx5dr_action_create_crypto(struct mlx5dr_context *ctx,
struct mlx5dr_action_crypto_attr *attr,
uint32_t flags);
+/* Create insert header action.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] num_of_hdrs
+ * Number of provided headers in "hdrs" array.
+ * @param[in] hdrs
+ * Headers array containing header information.
+ * @param[in] log_bulk_size
+ * Number of unique values used with this insert header.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
+ uint8_t num_of_hdrs,
+ struct mlx5dr_action_insert_header *hdrs,
+ uint32_t log_bulk_size,
+ uint32_t flags);
+
/* Destroy direct rule action.
*
* @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index fe9c39b207..9885555a8f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -34,6 +34,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
+ BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_TBL) |
@@ -55,6 +56,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
+ BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -80,6 +82,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
+ BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -640,20 +643,15 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC;
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
- attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
- attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
- attr->insert_header.encap = 1;
- attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
- attr->insert_header.arg_id = action->reformat.arg_obj->id;
- attr->insert_header.header_size = action->reformat.header_size;
- break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
+ case MLX5DR_ACTION_TYP_INSERT_HEADER:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
- attr->insert_header.encap = 1;
- attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
+ attr->insert_header.encap = action->reformat.encap;
+ attr->insert_header.insert_anchor = action->reformat.anchor;
attr->insert_header.arg_id = action->reformat.arg_obj->id;
attr->insert_header.header_size = action->reformat.header_size;
+ attr->insert_header.insert_offset = action->reformat.offset;
break;
case MLX5DR_ACTION_TYP_ASO_METER:
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
@@ -1382,7 +1380,7 @@ mlx5dr_action_create_reformat_root(struct mlx5dr_action *action,
}
static int
-mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action,
+mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
uint8_t num_of_hdrs,
struct mlx5dr_action_reformat_header *hdrs,
uint32_t log_bulk_sz)
@@ -1392,8 +1390,8 @@ mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action,
int ret, i;
for (i = 0; i < num_of_hdrs; i++) {
- if (hdrs[i].sz % 2 != 0) {
- DR_LOG(ERR, "Header data size should be multiply of 2");
+ if (hdrs[i].sz % W_SIZE != 0) {
+ DR_LOG(ERR, "Header data size should be in WORD granularity");
rte_errno = EINVAL;
return rte_errno;
}
@@ -1415,6 +1413,13 @@ mlx5dr_action_handle_l2_to_tunnel_l2(struct mlx5dr_action *action,
action[i].reformat.num_of_hdrs = num_of_hdrs;
action[i].reformat.max_hdr_sz = max_sz;
+ if (action[i].type == MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2 ||
+ action[i].type == MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3) {
+ action[i].reformat.anchor = MLX5_HEADER_ANCHOR_PACKET_START;
+ action[i].reformat.offset = 0;
+ action[i].reformat.encap = 1;
+ }
+
ret = mlx5dr_action_create_stcs(&action[i], NULL);
if (ret) {
DR_LOG(ERR, "Failed to create stc for reformat");
@@ -1448,7 +1453,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action,
}
/* Reuse the insert with pointer for the L2L3 header */
- ret = mlx5dr_action_handle_l2_to_tunnel_l2(action,
+ ret = mlx5dr_action_handle_insert_with_ptr(action,
num_of_hdrs,
hdrs,
log_bulk_sz);
@@ -1592,7 +1597,7 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action,
ret = mlx5dr_action_create_stcs(action, NULL);
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
- ret = mlx5dr_action_handle_l2_to_tunnel_l2(action, num_of_hdrs, hdrs, bulk_size);
+ ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size);
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size);
@@ -1622,6 +1627,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
if (!num_of_hdrs) {
DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero");
+ rte_errno = EINVAL;
return NULL;
}
@@ -1657,7 +1663,6 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
ret = mlx5dr_action_create_reformat_hws(action, num_of_hdrs, hdrs, log_bulk_size);
if (ret) {
DR_LOG(ERR, "Failed to create HWS reformat action");
- rte_errno = EINVAL;
goto free_action;
}
@@ -2186,6 +2191,81 @@ mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx,
return action;
}
+struct mlx5dr_action *
+mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
+ uint8_t num_of_hdrs,
+ struct mlx5dr_action_insert_header *hdrs,
+ uint32_t log_bulk_size,
+ uint32_t flags)
+{
+ struct mlx5dr_action_reformat_header *reformat_hdrs;
+ struct mlx5dr_action *action;
+ int i, ret;
+
+ if (!num_of_hdrs) {
+ DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero");
+ return NULL;
+ }
+
+ if (mlx5dr_action_is_root_flags(flags)) {
+ DR_LOG(ERR, "Dynamic reformat action not supported over root");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (!mlx5dr_action_is_hws_flags(flags) ||
+ ((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) {
+ DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ action = mlx5dr_action_create_generic_bulk(ctx, flags,
+ MLX5DR_ACTION_TYP_INSERT_HEADER,
+ num_of_hdrs);
+ if (!action)
+ return NULL;
+
+ reformat_hdrs = simple_calloc(num_of_hdrs, sizeof(*reformat_hdrs));
+ if (!reformat_hdrs) {
+ DR_LOG(ERR, "Failed to allocate memory for reformat_hdrs");
+ rte_errno = ENOMEM;
+ goto free_action;
+ }
+
+ for (i = 0; i < num_of_hdrs; i++) {
+ if (hdrs[i].offset % W_SIZE != 0) {
+ DR_LOG(ERR, "Header offset should be in WORD granularity");
+ rte_errno = EINVAL;
+ goto free_reformat_hdrs;
+ }
+
+ action[i].reformat.anchor = hdrs[i].anchor;
+ action[i].reformat.encap = hdrs[i].encap;
+ action[i].reformat.offset = hdrs[i].offset;
+ reformat_hdrs[i].sz = hdrs[i].hdr.sz;
+ reformat_hdrs[i].data = hdrs[i].hdr.data;
+ }
+
+ ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs,
+ reformat_hdrs, log_bulk_size);
+ if (ret) {
+ DR_LOG(ERR, "Failed to create HWS reformat action");
+ rte_errno = EINVAL;
+ goto free_reformat_hdrs;
+ }
+
+ simple_free(reformat_hdrs);
+
+ return action;
+
+free_reformat_hdrs:
+ simple_free(reformat_hdrs);
+free_action:
+ simple_free(action);
+ return NULL;
+}
+
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
{
struct mlx5dr_devx_obj *obj = NULL;
@@ -2252,6 +2332,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
mlx5dr_action_destroy_stcs(&action[i]);
mlx5dr_cmd_destroy_obj(action->reformat.arg_obj);
break;
+ case MLX5DR_ACTION_TYP_INSERT_HEADER:
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
for (i = 0; i < action->reformat.num_of_hdrs; i++)
mlx5dr_action_destroy_stcs(&action[i]);
@@ -2850,6 +2931,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
setter->idx_single = i;
break;
+ case MLX5DR_ACTION_TYP_INSERT_HEADER:
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
/* Double insert header with pointer */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE);
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index b64d6cc9a8..02358da4cb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -136,8 +136,11 @@ struct mlx5dr_action {
struct {
struct mlx5dr_devx_obj *arg_obj;
uint32_t header_size;
- uint8_t num_of_hdrs;
uint16_t max_hdr_sz;
+ uint8_t num_of_hdrs;
+ uint8_t anchor;
+ uint8_t offset;
+ bool encap;
} reformat;
struct {
struct mlx5dr_devx_obj *devx_obj;
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 02547e7178..0ba4774f08 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -492,9 +492,9 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
stc_attr->insert_header.insert_anchor);
/* HW gets the next 2 sizes in words */
MLX5_SET(stc_ste_param_insert, stc_parm, insert_size,
- stc_attr->insert_header.header_size / 2);
+ stc_attr->insert_header.header_size / W_SIZE);
MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset,
- stc_attr->insert_header.insert_offset / 2);
+ stc_attr->insert_header.insert_offset / W_SIZE);
MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument,
stc_attr->insert_header.arg_id);
break;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 552dba5e63..29e207765b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -29,6 +29,7 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_DEST_ARRAY] = "DEST_ARRAY",
[MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT] = "CRYPTO_ENCRYPT",
[MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT",
+ [MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER",
};
static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
diff --git a/drivers/net/mlx5/hws/mlx5dr_internal.h b/drivers/net/mlx5/hws/mlx5dr_internal.h
index 021d599a56..b9efdc4a9a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_internal.h
+++ b/drivers/net/mlx5/hws/mlx5dr_internal.h
@@ -40,6 +40,7 @@
#include "mlx5dr_pat_arg.h"
#include "mlx5dr_crc32.h"
+#define W_SIZE 2
#define DW_SIZE 4
#define BITS_IN_BYTE 8
#define BITS_IN_DW (BITS_IN_BYTE * DW_SIZE)
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 21/30] net/mlx5/hws: support remove header action
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (18 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 20/30] net/mlx5/hws: support insert header action Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 22/30] net/mlx5/hws: allow jump to TIR over FDB Gregory Etelson
` (9 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Hamdan Igbaria, Alex Vesker,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Hamdan Igbaria <hamdani@nvidia.com>
Support the remove header action. This action allows the user
to execute dynamic decaps, either by providing a start anchor
and the number of words to remove, or by providing a start
anchor and an end anchor.
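For illustration only, a minimal usage sketch of the new API with the
by-offset variant. The mlx5dr context, the VLAN anchor choice and the
RX HWS flag are assumptions of this example, not part of the patch:
static struct mlx5dr_action *
example_remove_hdr(struct mlx5dr_context *ctx)
{
	/* Remove 8 bytes starting at the first VLAN anchor. The size
	 * must be in WORD (2B) granularity and at most 128 bytes, as
	 * enforced by the checks added in this patch. */
	struct mlx5dr_action_remove_header_attr attr = {
		.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET,
		.by_offset = {
			.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START,
			.size = 8,
		},
	};
	return mlx5dr_action_create_remove_header(ctx, &attr,
						  MLX5DR_ACTION_FLAG_HWS_RX);
}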
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr.h | 40 ++++++++++++++
drivers/net/mlx5/hws/mlx5dr_action.c | 78 ++++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_action.h | 7 +++
drivers/net/mlx5/hws/mlx5dr_debug.c | 1 +
4 files changed, 126 insertions(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index a6bbb85eed..2e692f76c3 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -51,6 +51,7 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT,
MLX5DR_ACTION_TYP_CRYPTO_DECRYPT,
MLX5DR_ACTION_TYP_INSERT_HEADER,
+ MLX5DR_ACTION_TYP_REMOVE_HEADER,
MLX5DR_ACTION_TYP_DEST_ROOT,
MLX5DR_ACTION_TYP_DEST_ARRAY,
MLX5DR_ACTION_TYP_MAX,
@@ -189,6 +190,29 @@ struct mlx5dr_action_insert_header {
bool encap;
};
+enum mlx5dr_action_remove_header_type {
+ MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET,
+ MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER,
+};
+
+struct mlx5dr_action_remove_header_attr {
+ enum mlx5dr_action_remove_header_type type;
+ union {
+ struct {
+ /* PRM start anchor from which header will be removed */
+ uint8_t start_anchor;
+ /* PRM end anchor till which header will be removed */
+ uint8_t end_anchor;
+ bool decap;
+ } by_anchor;
+ struct {
+ /* PRM start anchor from which header will be removed */
+ uint8_t start_anchor;
+ uint8_t size;
+ } by_offset;
+ };
+};
+
struct mlx5dr_action_mh_pattern {
/* Byte size of modify actions provided by "data" */
size_t sz;
@@ -849,6 +873,22 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
uint32_t log_bulk_size,
uint32_t flags);
+/* Create remove header action.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] attr
+ * attributes: specifies the remove header type, PRM start anchor and
+ * the PRM end anchor or the PRM start anchor and remove size in bytes.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
+ struct mlx5dr_action_remove_header_attr *attr,
+ uint32_t flags);
+
/* Destroy direct rule action.
*
* @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 9885555a8f..1a6296a728 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -9,6 +9,9 @@
#define MLX5DR_ACTION_METER_INIT_COLOR_OFFSET 1
#define MLX5DR_ACTION_ASO_FIRST_HIT_SET_OFFSET 9
+/* Header removal size limited to 128B (64 words) */
+#define MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE 128
+
/* This is the maximum allowed action order for each table type:
* TX: POP_VLAN, CTR, ASO, PUSH_VLAN, MODIFY, ENCAP, TRAILER, ENCRYPT,
* Term
@@ -21,6 +24,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
[MLX5DR_TABLE_TYPE_NIC_RX] = {
BIT(MLX5DR_ACTION_TYP_TAG),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
+ BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -69,6 +73,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
},
[MLX5DR_TABLE_TYPE_FDB] = {
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
+ BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
@@ -719,6 +724,19 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->reformat_trailer.op = action->reformat_trailer.op;
attr->reformat_trailer.size = action->reformat_trailer.size;
break;
+ case MLX5DR_ACTION_TYP_REMOVE_HEADER:
+ if (action->remove_header.type == MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER) {
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE;
+ attr->remove_header.decap = action->remove_header.decap;
+ attr->remove_header.start_anchor = action->remove_header.start_anchor;
+ attr->remove_header.end_anchor = action->remove_header.end_anchor;
+ } else {
+ attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS;
+ attr->remove_words.start_anchor = action->remove_header.start_anchor;
+ attr->remove_words.num_of_words = action->remove_header.num_of_words;
+ }
+ attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ break;
default:
DR_LOG(ERR, "Invalid action type %d", action->type);
assert(false);
@@ -2266,6 +2284,64 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
return NULL;
}
+struct mlx5dr_action *
+mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
+ struct mlx5dr_action_remove_header_attr *attr,
+ uint32_t flags)
+{
+ struct mlx5dr_action *action;
+
+ if (mlx5dr_action_is_root_flags(flags)) {
+ DR_LOG(ERR, "Remove header action not supported over root");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ action = mlx5dr_action_create_generic(ctx, flags, MLX5DR_ACTION_TYP_REMOVE_HEADER);
+ if (!action)
+ return NULL;
+
+ switch (attr->type) {
+ case MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER:
+ action->remove_header.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER;
+ action->remove_header.start_anchor = attr->by_anchor.start_anchor;
+ action->remove_header.end_anchor = attr->by_anchor.end_anchor;
+ action->remove_header.decap = attr->by_anchor.decap;
+ break;
+ case MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET:
+ if (attr->by_offset.size % W_SIZE != 0) {
+ DR_LOG(ERR, "Invalid size, HW supports header remove in WORD granularity");
+ rte_errno = EINVAL;
+ goto free_action;
+ }
+
+ if (attr->by_offset.size > MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE) {
+ DR_LOG(ERR, "Header removal size limited to %u bytes",
+ MLX5DR_ACTION_REMOVE_HEADER_MAX_SIZE);
+ rte_errno = EINVAL;
+ goto free_action;
+ }
+
+ action->remove_header.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_OFFSET;
+ action->remove_header.start_anchor = attr->by_offset.start_anchor;
+ action->remove_header.num_of_words = attr->by_offset.size / W_SIZE;
+ break;
+ default:
+ DR_LOG(ERR, "Unsupported remove header type %u", attr->type);
+ rte_errno = ENOTSUP;
+ goto free_action;
+ }
+
+ if (mlx5dr_action_create_stcs(action, NULL))
+ goto free_action;
+
+ return action;
+
+free_action:
+ simple_free(action);
+ return NULL;
+}
+
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
{
struct mlx5dr_devx_obj *obj = NULL;
@@ -2291,6 +2367,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
case MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT:
case MLX5DR_ACTION_TYP_CRYPTO_DECRYPT:
case MLX5DR_ACTION_TYP_REFORMAT_TRAILER:
+ case MLX5DR_ACTION_TYP_REMOVE_HEADER:
mlx5dr_action_destroy_stcs(action);
break;
case MLX5DR_ACTION_TYP_DEST_ROOT:
@@ -2923,6 +3000,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
setter->idx_double = i;
break;
+ case MLX5DR_ACTION_TYP_REMOVE_HEADER:
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
/* Single remove header to header */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY);
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 02358da4cb..4046f658e6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -161,6 +161,13 @@ struct mlx5dr_action {
uint8_t op;
uint8_t size;
} reformat_trailer;
+ struct {
+ uint8_t type;
+ uint8_t start_anchor;
+ uint8_t end_anchor;
+ uint8_t num_of_words;
+ bool decap;
+ } remove_header;
struct {
struct mlx5dr_devx_obj *devx_obj;
} devx_dest;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 29e207765b..5111f41648 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -30,6 +30,7 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_CRYPTO_ENCRYPT] = "CRYPTO_ENCRYPT",
[MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT",
[MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER",
+ [MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
};
static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 22/30] net/mlx5/hws: allow jump to TIR over FDB
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (19 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 21/30] net/mlx5/hws: support remove " Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 23/30] net/mlx5/hws: support dynamic re-parse Gregory Etelson
` (8 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Alex Vesker, Erez Shitrit,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Alex Vesker <valex@nvidia.com>
Currently the TIR action is allowed only for NIC RX.
This change allows the TIR action over FDB for RX traffic;
for TX traffic the packets will be dropped.
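A hedged usage sketch; the exact mlx5dr_action_create_dest_tir()
parameter list and the tir_obj argument are assumptions of this
example, only the capability gate and the drop-on-TX behavior come
from the patch itself:
static int
example_fdb_tir(struct mlx5dr_context *ctx, struct mlx5dr_devx_obj *tir_obj)
{
	struct mlx5dr_action *tir_action;
	/* With fdb_jump_to_tir_stc reported by the device, the HWS FDB
	 * flag is now accepted; RX traffic reaches the TIR while TX
	 * traffic hitting the rule is dropped by the FDB-TX fixup. */
	tir_action = mlx5dr_action_create_dest_tir(ctx, tir_obj,
						   MLX5DR_ACTION_FLAG_HWS_FDB,
						   true /* is_local */);
	if (!tir_action)
		return -rte_errno; /* ENOTSUP when the capability is missing */
	/* ... use the action in FDB rules, then destroy it ... */
	return 0;
}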
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 2 ++
drivers/net/mlx5/hws/mlx5dr_action.c | 27 ++++++++++++++++++++++-----
drivers/net/mlx5/hws/mlx5dr_cmd.c | 4 ++++
drivers/net/mlx5/hws/mlx5dr_cmd.h | 1 +
4 files changed, 29 insertions(+), 5 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 40e461cb82..bb2b990d5b 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2418,6 +2418,8 @@ struct mlx5_ifc_wqe_based_flow_table_cap_bits {
u8 reserved_at_180[0x10];
u8 ste_format_gen_wqe[0x10];
u8 linear_match_definer_reg_c3[0x20];
+ u8 fdb_jump_to_tir_stc[0x1];
+ u8 reserved_at_1c1[0x1f];
};
union mlx5_ifc_hca_cap_union_bits {
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 1a6296a728..05b6e97576 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -445,6 +445,7 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
break;
case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_ENCRYPTION:
+ /* Encrypt is allowed on RX side, requires mask in case of FDB */
if (fw_tbl_type == FS_FT_FDB_RX) {
fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
fixup_stc_attr->action_offset = stc_attr->action_offset;
@@ -454,6 +455,7 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
break;
case MLX5_IFC_STC_ACTION_TYPE_CRYPTO_IPSEC_DECRYPTION:
+ /* Decrypt is allowed on TX side, requires mask in case of FDB */
if (fw_tbl_type == FS_FT_FDB_TX) {
fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
fixup_stc_attr->action_offset = stc_attr->action_offset;
@@ -463,12 +465,10 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
break;
case MLX5_IFC_STC_ACTION_TYPE_TRAILER:
- if (table_type != MLX5DR_TABLE_TYPE_FDB)
- break;
-
+ /* Trailer has FDB limitations on RX and TX based on operation */
val = stc_attr->reformat_trailer.op;
- if ((val == MLX5DR_ACTION_TRAILER_OP_INSERT && !is_mirror) ||
- (val == MLX5DR_ACTION_TRAILER_OP_REMOVE && is_mirror)) {
+ if ((val == MLX5DR_ACTION_TRAILER_OP_INSERT && fw_tbl_type == FS_FT_FDB_RX) ||
+ (val == MLX5DR_ACTION_TRAILER_OP_REMOVE && fw_tbl_type == FS_FT_FDB_TX)) {
fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
fixup_stc_attr->action_offset = stc_attr->action_offset;
fixup_stc_attr->stc_offset = stc_attr->stc_offset;
@@ -476,6 +476,16 @@ mlx5dr_action_fixup_stc_attr(struct mlx5dr_context *ctx,
}
break;
+ case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR:
+ /* TIR is allowed on RX side, requires mask in case of FDB */
+ if (fw_tbl_type == FS_FT_FDB_TX) {
+ fixup_stc_attr->action_type = MLX5_IFC_STC_ACTION_TYPE_DROP;
+ fixup_stc_attr->action_offset = MLX5DR_ACTION_OFFSET_HIT;
+ fixup_stc_attr->stc_offset = stc_attr->stc_offset;
+ use_fixup = true;
+ }
+ break;
+
default:
break;
}
@@ -976,6 +986,13 @@ mlx5dr_action_create_dest_tir(struct mlx5dr_context *ctx,
return NULL;
}
+ if ((flags & MLX5DR_ACTION_FLAG_ROOT_FDB) ||
+ (flags & MLX5DR_ACTION_FLAG_HWS_FDB && !ctx->caps->fdb_tir_stc)) {
+ DR_LOG(ERR, "TIR action not support on FDB");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
if (!is_local) {
DR_LOG(ERR, "TIR should be created on local ibv_device, flags: 0x%x",
flags);
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 0ba4774f08..135d31dca1 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -1275,6 +1275,10 @@ int mlx5dr_cmd_query_caps(struct ibv_context *ctx,
caps->supp_ste_format_gen_wqe = MLX5_GET(query_hca_cap_out, out,
capability.wqe_based_flow_table_cap.
ste_format_gen_wqe);
+
+ caps->fdb_tir_stc = MLX5_GET(query_hca_cap_out, out,
+ capability.wqe_based_flow_table_cap.
+ fdb_jump_to_tir_stc);
}
if (caps->eswitch_manager) {
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index c082157538..cb27212a5b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -241,6 +241,7 @@ struct mlx5dr_cmd_query_caps {
uint8_t log_header_modify_argument_granularity;
uint8_t log_header_modify_argument_max_alloc;
uint8_t sq_ts_format;
+ uint8_t fdb_tir_stc;
uint64_t definer_format_sup;
uint32_t trivial_match_definer;
uint32_t vhca_id;
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 23/30] net/mlx5/hws: support dynamic re-parse
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (20 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 22/30] net/mlx5/hws: allow jump to TIR over FDB Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 24/30] net/mlx5/hws: dynamic re-parse for modify header Gregory Etelson
` (7 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Alex Vesker, Erez Shitrit,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Alex Vesker <valex@nvidia.com>
Each steering entry (STE) has a bit called re-parse, used for
re-parsing the packet in HW. Re-parsing is needed after a
reformat (e.g. push/pop/encapsulate/...) or when modifying
packet headers in a way that changes the packet structure
(e.g. TCP to UDP). Until now we re-parsed the packet in each
STE, leading to longer processing per packet. On supported
devices we can now control the re-parse bit for better
performance.
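A condensed sketch of the resulting policy, mirroring the
mlx5dr_context_get_reparse_mode() helper added below; nothing here
goes beyond what the patch itself does:
/* RTC level: never re-parse by default when the device can re-parse
 * per STC, otherwise keep the old always-re-parse behavior. */
static uint8_t example_rtc_reparse_mode(struct mlx5dr_context *ctx)
{
	if (IS_BIT_SET(ctx->caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_BY_STC))
		return MLX5_IFC_RTC_REPARSE_NEVER;
	return MLX5_IFC_RTC_REPARSE_ALWAYS;
}
/* STC level: only packet-structure-changing actions (insert/remove,
 * push/pop VLAN, modify header, trailer) request
 * MLX5_IFC_STC_REPARSE_ALWAYS; the rest use MLX5_IFC_STC_REPARSE_IGNORE. */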
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 10 ++++-
drivers/net/mlx5/hws/mlx5dr_action.c | 58 +++++++++++++++++----------
drivers/net/mlx5/hws/mlx5dr_action.h | 2 +-
drivers/net/mlx5/hws/mlx5dr_cmd.c | 3 +-
drivers/net/mlx5/hws/mlx5dr_cmd.h | 2 +
drivers/net/mlx5/hws/mlx5dr_context.c | 15 +++++++
drivers/net/mlx5/hws/mlx5dr_context.h | 9 ++++-
drivers/net/mlx5/hws/mlx5dr_matcher.c | 2 +
8 files changed, 75 insertions(+), 26 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index bb2b990d5b..a5ecce98e9 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3445,6 +3445,7 @@ enum mlx5_ifc_rtc_ste_format {
enum mlx5_ifc_rtc_reparse_mode {
MLX5_IFC_RTC_REPARSE_NEVER = 0x0,
MLX5_IFC_RTC_REPARSE_ALWAYS = 0x1,
+ MLX5_IFC_RTC_REPARSE_BY_STC = 0x2,
};
#define MLX5_IFC_RTC_LINEAR_LOOKUP_TBL_LOG_MAX 16
@@ -3515,6 +3516,12 @@ enum mlx5_ifc_stc_action_type {
MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK = 0x86,
};
+enum mlx5_ifc_stc_reparse_mode {
+ MLX5_IFC_STC_REPARSE_IGNORE = 0x0,
+ MLX5_IFC_STC_REPARSE_NEVER = 0x1,
+ MLX5_IFC_STC_REPARSE_ALWAYS = 0x2,
+};
+
struct mlx5_ifc_stc_ste_param_ste_table_bits {
u8 ste_obj_id[0x20];
u8 match_definer_id[0x20];
@@ -3648,7 +3655,8 @@ enum {
struct mlx5_ifc_stc_bits {
u8 modify_field_select[0x40];
- u8 reserved_at_40[0x48];
+ u8 reserved_at_40[0x46];
+ u8 reparse_mode[0x2];
u8 table_type[0x8];
u8 ste_action_offset[0x8];
u8 action_type[0x8];
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 05b6e97576..bdccfb9cf3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -124,16 +124,18 @@ static int mlx5dr_action_get_shared_stc_nic(struct mlx5dr_context *ctx,
goto unlock_and_out;
}
switch (stc_type) {
- case MLX5DR_CONTEXT_SHARED_STC_DECAP:
+ case MLX5DR_CONTEXT_SHARED_STC_DECAP_L3:
stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE;
stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
stc_attr.remove_header.decap = 0;
stc_attr.remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
stc_attr.remove_header.end_anchor = MLX5_HEADER_ANCHOR_IPV6_IPV4;
break;
- case MLX5DR_CONTEXT_SHARED_STC_POP:
+ case MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP:
stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS;
stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
stc_attr.remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START;
stc_attr.remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN;
break;
@@ -512,6 +514,11 @@ int mlx5dr_action_alloc_single_stc(struct mlx5dr_context *ctx,
}
stc_attr->stc_offset = stc->offset;
+
+ /* Dynamic reparse not supported, overwrite and use default */
+ if (!mlx5dr_context_cap_dynamic_reparse(ctx))
+ stc_attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
+
devx_obj_0 = mlx5dr_pool_chunk_get_base_devx_obj(stc_pool, stc);
/* According to table/action limitation change the stc_attr */
@@ -600,6 +607,8 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
struct mlx5dr_devx_obj *obj,
struct mlx5dr_cmd_stc_modify_attr *attr)
{
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
+
switch (action->type) {
case MLX5DR_ACTION_TYP_TAG:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TAG;
@@ -626,6 +635,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
case MLX5DR_ACTION_TYP_MODIFY_HDR:
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
if (action->modify_header.num_of_actions == 1) {
attr->modify_action.data = action->modify_header.single_action;
attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data);
@@ -653,6 +663,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->remove_header.decap = 1;
attr->remove_header.start_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
attr->remove_header.end_anchor = MLX5_HEADER_ANCHOR_INNER_MAC;
@@ -662,6 +673,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_INSERT_HEADER:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->insert_header.encap = action->reformat.encap;
attr->insert_header.insert_anchor = action->reformat.anchor;
attr->insert_header.arg_id = action->reformat.arg_obj->id;
@@ -705,12 +717,14 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_POP_VLAN:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->remove_words.start_anchor = MLX5_HEADER_ANCHOR_FIRST_VLAN_START;
attr->remove_words.num_of_words = MLX5DR_ACTION_HDR_LEN_L2_VLAN / 2;
break;
case MLX5DR_ACTION_TYP_PUSH_VLAN:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->insert_header.encap = 0;
attr->insert_header.is_inline = 1;
attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
@@ -730,6 +744,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_TRAILER:
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_TRAILER;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->reformat_trailer.type = action->reformat_trailer.type;
attr->reformat_trailer.op = action->reformat_trailer.op;
attr->reformat_trailer.size = action->reformat_trailer.size;
@@ -746,6 +761,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
attr->remove_words.num_of_words = action->remove_header.num_of_words;
}
attr->action_offset = MLX5DR_ACTION_OFFSET_DW5;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
break;
default:
DR_LOG(ERR, "Invalid action type %d", action->type);
@@ -1310,7 +1326,7 @@ mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags)
if (!action)
return NULL;
- ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP);
+ ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP);
if (ret) {
DR_LOG(ERR, "Failed to create remove stc for reformat");
goto free_action;
@@ -1325,7 +1341,7 @@ mlx5dr_action_create_pop_vlan(struct mlx5dr_context *ctx, uint32_t flags)
return action;
free_shared:
- mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP);
+ mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP);
free_action:
simple_free(action);
return NULL;
@@ -1481,7 +1497,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action,
int ret;
/* The action is remove-l2-header + insert-l3-header */
- ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP);
+ ret = mlx5dr_action_get_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3);
if (ret) {
DR_LOG(ERR, "Failed to create remove stc for reformat");
return ret;
@@ -1498,7 +1514,7 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action,
return 0;
put_shared_stc:
- mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP);
+ mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3);
return ret;
}
@@ -2393,7 +2409,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
break;
case MLX5DR_ACTION_TYP_POP_VLAN:
mlx5dr_action_destroy_stcs(action);
- mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_POP);
+ mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP);
break;
case MLX5DR_ACTION_TYP_DEST_ARRAY:
mlx5dr_action_destroy_stcs(action);
@@ -2421,7 +2437,7 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
mlx5dr_cmd_destroy_obj(obj);
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
- mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP);
+ mlx5dr_action_put_shared_stc(action, MLX5DR_CONTEXT_SHARED_STC_DECAP_L3);
for (i = 0; i < action->reformat.num_of_hdrs; i++)
mlx5dr_action_destroy_stcs(&action[i]);
mlx5dr_cmd_destroy_obj(action->reformat.arg_obj);
@@ -2481,6 +2497,7 @@ int mlx5dr_action_get_default_stc(struct mlx5dr_context *ctx,
stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_NOP;
stc_attr.action_offset = MLX5DR_ACTION_OFFSET_DW0;
+ stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
ret = mlx5dr_action_alloc_single_stc(ctx, &stc_attr, tbl_type,
&default_stc->nop_ctr);
if (ret) {
@@ -2858,7 +2875,7 @@ mlx5dr_action_setter_single_double_pop(struct mlx5dr_actions_apply_data *apply,
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0;
apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] =
htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res,
- MLX5DR_CONTEXT_SHARED_STC_POP));
+ MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP));
}
static void
@@ -2893,7 +2910,7 @@ mlx5dr_action_setter_common_decap(struct mlx5dr_actions_apply_data *apply,
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0;
apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] =
htobe32(mlx5dr_action_get_shared_stc_offset(apply->common_res,
- MLX5DR_CONTEXT_SHARED_STC_DECAP));
+ MLX5DR_CONTEXT_SHARED_STC_DECAP_L3));
}
static void
@@ -2983,8 +3000,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
pop_setter->set_single = &mlx5dr_action_setter_single_double_pop;
break;
}
- setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY);
- setter->flags |= ASF_SINGLE1 | ASF_REPARSE | ASF_REMOVE;
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY | ASF_INSERT);
+ setter->flags |= ASF_SINGLE1 | ASF_REMOVE;
setter->set_single = &mlx5dr_action_setter_single;
setter->idx_single = i;
pop_setter = setter;
@@ -2993,7 +3010,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_PUSH_VLAN:
/* Double insert inline */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
- setter->flags |= ASF_DOUBLE | ASF_REPARSE | ASF_MODIFY;
+ setter->flags |= ASF_DOUBLE | ASF_INSERT;
setter->set_double = &mlx5dr_action_setter_push_vlan;
setter->idx_double = i;
break;
@@ -3001,7 +3018,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_MODIFY_HDR:
/* Double modify header list */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
- setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE;
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
setter->set_double = &mlx5dr_action_setter_modify_header;
setter->idx_double = i;
break;
@@ -3021,7 +3038,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
/* Single remove header to header */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY);
- setter->flags |= ASF_SINGLE1 | ASF_REMOVE | ASF_REPARSE;
+ setter->flags |= ASF_SINGLE1 | ASF_REMOVE;
setter->set_single = &mlx5dr_action_setter_single;
setter->idx_single = i;
break;
@@ -3029,8 +3046,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_INSERT_HEADER:
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
/* Double insert header with pointer */
- setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE);
- setter->flags |= ASF_DOUBLE | ASF_REPARSE;
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
+ setter->flags |= ASF_DOUBLE | ASF_INSERT;
setter->set_double = &mlx5dr_action_setter_insert_ptr;
setter->idx_double = i;
break;
@@ -3038,7 +3055,7 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
/* Single remove + Double insert header with pointer */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_DOUBLE);
- setter->flags |= ASF_SINGLE1 | ASF_DOUBLE | ASF_REPARSE | ASF_REMOVE;
+ setter->flags |= ASF_SINGLE1 | ASF_DOUBLE;
setter->set_double = &mlx5dr_action_setter_insert_ptr;
setter->idx_double = i;
setter->set_single = &mlx5dr_action_setter_common_decap;
@@ -3047,9 +3064,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
/* Double modify header list with remove and push inline */
- setter = mlx5dr_action_setter_find_first(last_setter,
- ASF_DOUBLE | ASF_REMOVE);
- setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_REPARSE;
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY | ASF_INSERT;
setter->set_double = &mlx5dr_action_setter_tnl_l3_to_l2;
setter->idx_double = i;
break;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 4046f658e6..328de65a1e 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -55,7 +55,7 @@ enum mlx5dr_action_setter_flag {
ASF_SINGLE3 = 1 << 2,
ASF_DOUBLE = ASF_SINGLE2 | ASF_SINGLE3,
ASF_TRIPLE = ASF_SINGLE1 | ASF_DOUBLE,
- ASF_REPARSE = 1 << 3,
+ ASF_INSERT = 1 << 3,
ASF_REMOVE = 1 << 4,
ASF_MODIFY = 1 << 5,
ASF_CTR = 1 << 6,
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 135d31dca1..07c820afe5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -394,7 +394,7 @@ mlx5dr_cmd_rtc_create(struct ibv_context *ctx,
MLX5_SET(rtc, attr, ste_table_base_id, rtc_attr->ste_base);
MLX5_SET(rtc, attr, ste_table_offset, rtc_attr->ste_offset);
MLX5_SET(rtc, attr, miss_flow_table_id, rtc_attr->miss_ft_id);
- MLX5_SET(rtc, attr, reparse_mode, MLX5_IFC_RTC_REPARSE_ALWAYS);
+ MLX5_SET(rtc, attr, reparse_mode, rtc_attr->reparse_mode);
devx_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in), out, sizeof(out));
if (!devx_obj->obj) {
@@ -586,6 +586,7 @@ mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj,
attr = MLX5_ADDR_OF(create_stc_in, in, stc);
MLX5_SET(stc, attr, ste_action_offset, stc_attr->action_offset);
MLX5_SET(stc, attr, action_type, stc_attr->action_type);
+ MLX5_SET(stc, attr, reparse_mode, stc_attr->reparse_mode);
MLX5_SET64(stc, attr, modify_field_select,
MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC);
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index cb27212a5b..7792fc48aa 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -79,6 +79,7 @@ struct mlx5dr_cmd_rtc_create_attr {
uint8_t table_type;
uint8_t match_definer_0;
uint8_t match_definer_1;
+ uint8_t reparse_mode;
bool is_frst_jumbo;
bool is_scnd_range;
};
@@ -98,6 +99,7 @@ struct mlx5dr_cmd_stc_create_attr {
struct mlx5dr_cmd_stc_modify_attr {
uint32_t stc_offset;
uint8_t action_offset;
+ uint8_t reparse_mode;
enum mlx5_ifc_stc_action_type action_type;
union {
uint32_t id; /* TIRN, TAG, FT ID, STE ID, CRYPTO */
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c
index 08a5ee92a5..15d53c578a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.c
+++ b/drivers/net/mlx5/hws/mlx5dr_context.c
@@ -4,6 +4,21 @@
#include "mlx5dr_internal.h"
+bool mlx5dr_context_cap_dynamic_reparse(struct mlx5dr_context *ctx)
+{
+ return IS_BIT_SET(ctx->caps->rtc_reparse_mode, MLX5_IFC_RTC_REPARSE_BY_STC);
+}
+
+uint8_t mlx5dr_context_get_reparse_mode(struct mlx5dr_context *ctx)
+{
+ /* Prefer to use dynamic reparse, reparse only specific actions */
+ if (mlx5dr_context_cap_dynamic_reparse(ctx))
+ return MLX5_IFC_RTC_REPARSE_NEVER;
+
+ /* Otherwise use less efficient static */
+ return MLX5_IFC_RTC_REPARSE_ALWAYS;
+}
+
static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx)
{
struct mlx5dr_pool_attr pool_attr = {0};
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.h b/drivers/net/mlx5/hws/mlx5dr_context.h
index 0ba8d0c92e..f476c2308c 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.h
+++ b/drivers/net/mlx5/hws/mlx5dr_context.h
@@ -11,8 +11,8 @@ enum mlx5dr_context_flags {
};
enum mlx5dr_context_shared_stc_type {
- MLX5DR_CONTEXT_SHARED_STC_DECAP = 0,
- MLX5DR_CONTEXT_SHARED_STC_POP = 1,
+ MLX5DR_CONTEXT_SHARED_STC_DECAP_L3 = 0,
+ MLX5DR_CONTEXT_SHARED_STC_DOUBLE_POP = 1,
MLX5DR_CONTEXT_SHARED_STC_MAX = 2,
};
@@ -60,4 +60,9 @@ mlx5dr_context_get_local_ibv(struct mlx5dr_context *ctx)
return ctx->ibv_ctx;
}
+
+bool mlx5dr_context_cap_dynamic_reparse(struct mlx5dr_context *ctx);
+
+uint8_t mlx5dr_context_get_reparse_mode(struct mlx5dr_context *ctx);
+
#endif /* MLX5DR_CONTEXT_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 6f74cf3677..cd6cbdeceb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -583,6 +583,7 @@ static int mlx5dr_matcher_create_rtc(struct mlx5dr_matcher *matcher,
rtc_attr.pd = ctx->pd_num;
rtc_attr.ste_base = devx_obj->id;
rtc_attr.ste_offset = ste->offset;
+ rtc_attr.reparse_mode = mlx5dr_context_get_reparse_mode(ctx);
rtc_attr.table_type = mlx5dr_table_get_res_fw_ft_type(tbl->type, false);
mlx5dr_matcher_set_rtc_attr_sz(matcher, &rtc_attr, rtc_type, false);
@@ -790,6 +791,7 @@ static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
/* Allocate STC for jumps to STE */
stc_attr.action_offset = MLX5DR_ACTION_OFFSET_HIT;
stc_attr.action_type = MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE;
+ stc_attr.reparse_mode = MLX5_IFC_STC_REPARSE_NEVER;
stc_attr.ste_table.ste = matcher->action_ste.ste;
stc_attr.ste_table.ste_pool = matcher->action_ste.pool;
stc_attr.ste_table.match_definer_id = ctx->caps->trivial_match_definer;
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 24/30] net/mlx5/hws: dynamic re-parse for modify header
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (21 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 23/30] net/mlx5/hws: support dynamic re-parse Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 25/30] net/mlx5: sample the srv6 last segment Gregory Etelson
` (6 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Alex Vesker, Erez Shitrit,
Matan Azrad, Viacheslav Ovsiienko, Ori Kam, Suanming Mou
From: Alex Vesker <valex@nvidia.com>
With dynamic re-parse, modify header actions would always
require re-parse, but this is not always necessary: re-parse
is only needed when the packet structure is changed. This
support allows deciding dynamically, based on the action
pattern, whether re-parse is required.
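A short sketch of the classification using the
mlx5dr_pat_require_reparse() helper added below. The single SET action
and the destination-IPv6 field choice are illustrative; the length,
offset and data words are omitted since the helper only inspects the
action type and field:
__be64 cmd[1] = {0};
bool need_reparse;
/* A plain SET on an existing field does not change the packet
 * structure, so no re-parse is requested for it. */
MLX5_SET(set_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_SET);
MLX5_SET(set_action_in, &cmd[0], field, MLX5_MODI_OUT_DIPV6_31_0);
need_reparse = mlx5dr_pat_require_reparse(cmd, 1);
/* false here; only ETHERTYPE or IPV6_NEXT_HDR writes, or insert/remove
 * style actions, make the helper return true. */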
Signed-off-by: Alex Vesker <valex@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_action.c | 15 +++++++---
drivers/net/mlx5/hws/mlx5dr_action.h | 1 +
drivers/net/mlx5/hws/mlx5dr_pat_arg.c | 41 +++++++++++++++++++++++++--
drivers/net/mlx5/hws/mlx5dr_pat_arg.h | 2 ++
4 files changed, 53 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index bdccfb9cf3..59be8ae2c5 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -635,7 +635,9 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
case MLX5DR_ACTION_TYP_MODIFY_HDR:
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
- attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
+ if (action->modify_header.require_reparse)
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
+
if (action->modify_header.num_of_actions == 1) {
attr->modify_action.data = action->modify_header.single_action;
attr->action_type = mlx5dr_action_get_mh_stc_type(attr->modify_action.data);
@@ -1614,6 +1616,8 @@ mlx5dr_action_handle_tunnel_l3_to_l2(struct mlx5dr_action *action,
action[i].modify_header.num_of_actions = num_of_actions;
action[i].modify_header.arg_obj = arg_obj;
action[i].modify_header.pat_obj = pat_obj;
+ action[i].modify_header.require_reparse =
+ mlx5dr_pat_require_reparse((__be64 *)mh_data, num_of_actions);
ret = mlx5dr_action_create_stcs(&action[i], NULL);
if (ret) {
@@ -1760,7 +1764,7 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
{
struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL;
struct mlx5dr_context *ctx = action->ctx;
- uint16_t max_mh_actions = 0;
+ uint16_t num_actions, max_mh_actions = 0;
int i, ret;
/* Calculate maximum number of mh actions for shared arg allocation */
@@ -1786,11 +1790,14 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
goto free_stc_and_pat;
}
+ num_actions = pattern[i].sz / MLX5DR_MODIFY_ACTION_SIZE;
action[i].modify_header.num_of_patterns = num_of_patterns;
action[i].modify_header.max_num_of_actions = max_mh_actions;
- action[i].modify_header.num_of_actions = pattern[i].sz / MLX5DR_MODIFY_ACTION_SIZE;
+ action[i].modify_header.num_of_actions = num_actions;
+ action[i].modify_header.require_reparse =
+ mlx5dr_pat_require_reparse(pattern[i].data, num_actions);
- if (action[i].modify_header.num_of_actions == 1) {
+ if (num_actions == 1) {
pat_obj = NULL;
/* Optimize single modify action to be used inline */
action[i].modify_header.single_action = pattern[i].data[0];
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index 328de65a1e..e56f5b59c7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -132,6 +132,7 @@ struct mlx5dr_action {
uint8_t single_action_type;
uint8_t num_of_actions;
uint8_t max_num_of_actions;
+ uint8_t require_reparse;
} modify_header;
struct {
struct mlx5dr_devx_obj *arg_obj;
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
index 349d77f296..a949844d24 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.c
@@ -37,6 +37,43 @@ uint32_t mlx5dr_arg_get_arg_size(uint16_t num_of_actions)
return BIT(mlx5dr_arg_get_arg_log_size(num_of_actions));
}
+bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions)
+{
+ uint16_t i, field;
+ uint8_t action_id;
+
+ for (i = 0; i < num_of_actions; i++) {
+ action_id = MLX5_GET(set_action_in, &actions[i], action_type);
+
+ switch (action_id) {
+ case MLX5_MODIFICATION_TYPE_NOP:
+ field = MLX5_MODI_OUT_NONE;
+ break;
+
+ case MLX5_MODIFICATION_TYPE_SET:
+ case MLX5_MODIFICATION_TYPE_ADD:
+ field = MLX5_GET(set_action_in, &actions[i], field);
+ break;
+
+ case MLX5_MODIFICATION_TYPE_COPY:
+ case MLX5_MODIFICATION_TYPE_ADD_FIELD:
+ field = MLX5_GET(copy_action_in, &actions[i], dst_field);
+ break;
+
+ default:
+ /* Insert/Remove/Unknown actions require reparse */
+ return true;
+ }
+
+ /* Below fields can change packet structure require a reparse */
+ if (field == MLX5_MODI_OUT_ETHERTYPE ||
+ field == MLX5_MODI_OUT_IPV6_NEXT_HDR)
+ return true;
+ }
+
+ return false;
+}
+
/* Cache and cache element handling */
int mlx5dr_pat_init_pattern_cache(struct mlx5dr_pattern_cache **cache)
{
@@ -228,8 +265,8 @@ mlx5dr_pat_get_pattern(struct mlx5dr_context *ctx,
}
pat_obj = mlx5dr_cmd_header_modify_pattern_create(ctx->ibv_ctx,
- pattern_sz,
- (uint8_t *)pattern);
+ pattern_sz,
+ (uint8_t *)pattern);
if (!pat_obj) {
DR_LOG(ERR, "Failed to create pattern FW object");
goto out_unlock;
diff --git a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
index 2a38891c4d..bbe313102f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
+++ b/drivers/net/mlx5/hws/mlx5dr_pat_arg.h
@@ -79,6 +79,8 @@ void mlx5dr_pat_put_pattern(struct mlx5dr_context *ctx,
bool mlx5dr_arg_is_valid_arg_request_size(struct mlx5dr_context *ctx,
uint32_t arg_size);
+bool mlx5dr_pat_require_reparse(__be64 *actions, uint16_t num_of_actions);
+
void mlx5dr_arg_write(struct mlx5dr_send_engine *queue,
void *comp_data,
uint32_t arg_idx,
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 25/30] net/mlx5: sample the srv6 last segment
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (22 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 24/30] net/mlx5/hws: dynamic re-parse for modify header Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 26/30] net/mlx5/hws: fix potential wrong errno value Gregory Etelson
` (5 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, Ori Kam, Suanming Mou,
Matan Azrad, Viacheslav Ovsiienko
From: Rongwei Liu <rongweil@nvidia.com>
When removing the IPv6 routing extension header from a
packet, the destination address should be updated to the
last address in the segment list.
Enlarge the hardware sample scope to cover the last segment.
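The resulting sample layout, as a reading of the loop below (offsets
are bytes from the start of the SRv6 header; the bit-range mapping
follows the destination-address modify fields used later in this
series and is an interpretation, not new driver code):
/*
 * sample[0]  offset  0  first dword: next_hdr, hdr_ext_len,
 *                       routing type, segments_left
 * sample[1]  offset  8  last segment (final DIP), bits 127..96
 * sample[2]  offset 12  last segment (final DIP), bits  95..64
 * sample[3]  offset 16  last segment (final DIP), bits  63..32
 * sample[4]  offset 20  last segment (final DIP), bits  31..0
 *
 * 4B first dword + 16B address = 20B = 5 dwords (MLX5_SRV6_SAMPLE_NUM).
 */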
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 41 ++++++++++++++++++++++++++++++-----------
drivers/net/mlx5/mlx5.h | 6 ++++++
2 files changed, 36 insertions(+), 11 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index cdb4eeb612..afb9c717dc 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1067,6 +1067,7 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
struct mlx5_devx_graph_node_attr node = {
.modify_field_select = 0,
};
+ uint32_t i;
uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM];
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_common_dev_config *config = &priv->sh->cdev->config;
@@ -1100,10 +1101,18 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
node.next_header_field_size = 0x8;
node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_IP;
node.in[0].compare_condition_value = IPPROTO_ROUTING;
- node.sample[0].flow_match_sample_en = 1;
- /* First come first serve no matter inner or outer. */
- node.sample[0].flow_match_sample_tunnel_mode = MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
- node.sample[0].flow_match_sample_offset_mode = MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+ /* Final IPv6 address. */
+ for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ node.sample[i].flow_match_sample_en = 1;
+ node.sample[i].flow_match_sample_offset_mode =
+ MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
+ /* First come first serve no matter inner or outer. */
+ node.sample[i].flow_match_sample_tunnel_mode =
+ MLX5_GRAPH_SAMPLE_TUNNEL_FIRST;
+ node.sample[i].flow_match_sample_field_base_offset =
+ (i + 1) * sizeof(uint32_t); /* in bytes */
+ }
+ node.sample[0].flow_match_sample_field_base_offset = 0;
node.out[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_TCP;
node.out[0].compare_condition_value = IPPROTO_TCP;
node.out[1].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_UDP;
@@ -1116,8 +1125,8 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
goto error;
}
priv->sh->srh_flex_parser.flex.devx_fp->devx_obj = fp;
- priv->sh->srh_flex_parser.flex.mapnum = 1;
- priv->sh->srh_flex_parser.flex.devx_fp->num_samples = 1;
+ priv->sh->srh_flex_parser.flex.mapnum = MLX5_SRV6_SAMPLE_NUM;
+ priv->sh->srh_flex_parser.flex.devx_fp->num_samples = MLX5_SRV6_SAMPLE_NUM;
ret = mlx5_devx_cmd_query_parse_samples(fp, ids, priv->sh->srh_flex_parser.flex.mapnum,
&priv->sh->srh_flex_parser.flex.devx_fp->anchor_id);
@@ -1125,12 +1134,22 @@ mlx5_alloc_srh_flex_parser(struct rte_eth_dev *dev)
DRV_LOG(ERR, "Failed to query sample IDs.");
goto error;
}
- ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[0],
- &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[0]);
- if (ret) {
- DRV_LOG(ERR, "Failed to query sample id information.");
- goto error;
+ for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ ret = mlx5_devx_cmd_match_sample_info_query(ibv_ctx, ids[i],
+ &priv->sh->srh_flex_parser.flex.devx_fp->sample_info[i]);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample id %u information.", ids[i]);
+ goto error;
+ }
+ }
+ for (i = 0; i <= MLX5_SRV6_SAMPLE_NUM - 1 && i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ priv->sh->srh_flex_parser.flex.devx_fp->sample_ids[i] = ids[i];
+ priv->sh->srh_flex_parser.flex.map[i].width = sizeof(uint32_t) * CHAR_BIT;
+ priv->sh->srh_flex_parser.flex.map[i].reg_id = i;
+ priv->sh->srh_flex_parser.flex.map[i].shift =
+ (i + 1) * sizeof(uint32_t) * CHAR_BIT;
}
+ priv->sh->srh_flex_parser.flex.map[0].shift = 0;
return 0;
error:
if (fp)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0289cbd04b..ad82d8060e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1335,6 +1335,7 @@ struct mlx5_flex_pattern_field {
uint16_t shift:5;
uint16_t reg_id:5;
};
+
#define MLX5_INVALID_SAMPLE_REG_ID 0x1F
/* Port flex item context. */
@@ -1346,6 +1347,11 @@ struct mlx5_flex_item {
struct mlx5_flex_pattern_field map[MLX5_FLEX_ITEM_MAPPING_NUM];
};
+/*
+ * Sample an IPv6 address and the first dword of SRv6 header.
+ * Then it is 16 + 4 = 20 bytes which is 5 dwords.
+ */
+#define MLX5_SRV6_SAMPLE_NUM 5
/* Mlx5 internal flex parser profile structure. */
struct mlx5_internal_flex_parser_profile {
uint32_t refcnt;
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 26/30] net/mlx5/hws: fix potential wrong errno value
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (23 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 25/30] net/mlx5: sample the srv6 last segment Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:31 ` [PATCH 27/30] net/mlx5/hws: add IPv6 routing extension push remove actions Gregory Etelson
` (4 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, hamdani, Alex Vesker,
Ori Kam, Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
From: Rongwei Liu <rongweil@nvidia.com>
A valid rte_errno is desired when the DR layer API returns an
error, and it must not overwrite the value set by the lower layer.
Fixes: df61fcd5f3ca ("net/mlx5/hws: support insert header action")
Cc: hamdani@nvidia.com
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_action.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 59be8ae2c5..76ca57d302 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -2262,6 +2262,7 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
if (!num_of_hdrs) {
DR_LOG(ERR, "Reformat num_of_hdrs cannot be zero");
+ rte_errno = EINVAL;
return NULL;
}
@@ -2309,7 +2310,6 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
reformat_hdrs, log_bulk_size);
if (ret) {
DR_LOG(ERR, "Failed to create HWS reformat action");
- rte_errno = EINVAL;
goto free_reformat_hdrs;
}
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 27/30] net/mlx5/hws: add IPv6 routing extension push remove actions
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (24 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 26/30] net/mlx5/hws: fix potential wrong errno value Gregory Etelson
@ 2023-10-29 16:31 ` Gregory Etelson
2023-10-29 16:32 ` [PATCH 28/30] net/mlx5/hws: add setter for IPv6 routing push remove Gregory Etelson
` (3 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:31 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, Alex Vesker, Ori Kam,
Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
From: Rongwei Liu <rongweil@nvidia.com>
Add two dr_actions to implement IPv6 routing extension push and
remove. The new actions are combinations of multiple actions
rather than new action types.
Basically, there are two modify header actions plus one reformat
action. The action order is the same as for the encap and decap
actions.
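An illustrative usage sketch of the new reformat API. The header
buffer, its length, the TX HWS flag and the rule-action index are
assumptions of this example; the function, enum values and the
ipv6_ext rule-action fields are the ones declared in this patch:
/* Create a push action for a prebuilt SRv6 header template. */
struct mlx5dr_action_reformat_header hdr = {
	.sz = srv6_hdr_len,
	.data = srv6_hdr,
};
struct mlx5dr_action *push;
push = mlx5dr_action_create_reformat_ipv6_ext(ctx,
		MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
		&hdr, 0 /* log_bulk_size */, MLX5DR_ACTION_FLAG_HWS_TX);
/* At rule insertion time the per-rule header and offset go through
 * the new ipv6_ext fields of struct mlx5dr_rule_action. */
rule_actions[at_idx].action = push;
rule_actions[at_idx].ipv6_ext.header = srv6_hdr;
rule_actions[at_idx].ipv6_ext.offset = 0;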
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 1 +
drivers/net/mlx5/hws/mlx5dr.h | 29 +++
drivers/net/mlx5/hws/mlx5dr_action.c | 358 ++++++++++++++++++++++++++-
drivers/net/mlx5/hws/mlx5dr_action.h | 7 +
drivers/net/mlx5/hws/mlx5dr_debug.c | 2 +
drivers/net/mlx5/mlx5_flow.h | 44 ++++
6 files changed, 438 insertions(+), 3 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index a5ecce98e9..32ec3df7ef 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3586,6 +3586,7 @@ enum mlx5_ifc_header_anchors {
MLX5_HEADER_ANCHOR_PACKET_START = 0x0,
MLX5_HEADER_ANCHOR_FIRST_VLAN_START = 0x2,
MLX5_HEADER_ANCHOR_IPV6_IPV4 = 0x07,
+ MLX5_HEADER_ANCHOR_TCP_UDP = 0x09,
MLX5_HEADER_ANCHOR_INNER_MAC = 0x13,
MLX5_HEADER_ANCHOR_INNER_IPV6_IPV4 = 0x19,
};
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 2e692f76c3..9e7dd9c429 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -54,6 +54,8 @@ enum mlx5dr_action_type {
MLX5DR_ACTION_TYP_REMOVE_HEADER,
MLX5DR_ACTION_TYP_DEST_ROOT,
MLX5DR_ACTION_TYP_DEST_ARRAY,
+ MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
+ MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
MLX5DR_ACTION_TYP_MAX,
};
@@ -278,6 +280,11 @@ struct mlx5dr_rule_action {
uint8_t *data;
} reformat;
+ struct {
+ uint32_t offset;
+ uint8_t *header;
+ } ipv6_ext;
+
struct {
rte_be32_t vlan_hdr;
} push_vlan;
@@ -889,6 +896,28 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
struct mlx5dr_action_remove_header_attr *attr,
uint32_t flags);
+/* Create action to push or remove IPv6 extension header.
+ *
+ * @param[in] ctx
+ * The context in which the new action will be created.
+ * @param[in] type
+ * Type of direct rule action: MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT or
+ * MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT.
+ * @param[in] hdr
+ * Header for packet reformat.
+ * @param[in] log_bulk_size
+ * Number of unique values used with this pattern.
+ * @param[in] flags
+ * Action creation flags. (enum mlx5dr_action_flags)
+ * @return pointer to mlx5dr_action on success NULL otherwise.
+ */
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
+ enum mlx5dr_action_type type,
+ struct mlx5dr_action_reformat_header *hdr,
+ uint32_t log_bulk_size,
+ uint32_t flags);
+
/* Destroy direct rule action.
*
* @param[in] action
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 76ca57d302..6ac3c2f782 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -26,7 +26,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
- BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+ BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_CTR),
@@ -39,6 +40,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+ BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_TBL) |
@@ -61,6 +63,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+ BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -75,7 +78,8 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
BIT(MLX5DR_ACTION_TYP_REMOVE_HEADER) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2) |
- BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2),
+ BIT(MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2) |
+ BIT(MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_POP_VLAN),
BIT(MLX5DR_ACTION_TYP_CTR),
@@ -88,6 +92,7 @@ static const uint32_t action_order_arr[MLX5DR_TABLE_TYPE_MAX][MLX5DR_ACTION_TYP_
BIT(MLX5DR_ACTION_TYP_PUSH_VLAN),
BIT(MLX5DR_ACTION_TYP_MODIFY_HDR),
BIT(MLX5DR_ACTION_TYP_INSERT_HEADER) |
+ BIT(MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2) |
BIT(MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3),
BIT(MLX5DR_ACTION_TYP_REFORMAT_TRAILER),
@@ -1710,7 +1715,7 @@ mlx5dr_action_create_reformat(struct mlx5dr_context *ctx,
if (!mlx5dr_action_is_hws_flags(flags) ||
((flags & MLX5DR_ACTION_FLAG_SHARED) && (log_bulk_size || num_of_hdrs > 1))) {
- DR_LOG(ERR, "Reformat flags don't fit HWS (flags: %x0x)", flags);
+ DR_LOG(ERR, "Reformat flags don't fit HWS (flags: 0x%x)", flags);
rte_errno = EINVAL;
goto free_action;
}
@@ -2382,6 +2387,347 @@ mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
return NULL;
}
+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
+{
+ struct mlx5dr_action_mh_pattern pattern;
+ __be64 cmd[3] = {0};
+ uint16_t mod_id;
+
+ mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+ if (!mod_id) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ /*
+ * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left.
+ * Next_hdr will be copied to ipv6.protocol after pop done.
+ */
+ MLX5_SET(copy_action_in, &cmd[0], action_type, MLX5_MODIFICATION_TYPE_COPY);
+ MLX5_SET(copy_action_in, &cmd[0], length, 8);
+ MLX5_SET(copy_action_in, &cmd[0], src_offset, 24);
+ MLX5_SET(copy_action_in, &cmd[0], src_field, mod_id);
+ MLX5_SET(copy_action_in, &cmd[0], dst_field, mod_id);
+
+ /* Add nop between the continuous same modify field id */
+ MLX5_SET(copy_action_in, &cmd[1], action_type, MLX5_MODIFICATION_TYPE_NOP);
+
+ /* Clear next_hdr for right checksum */
+ MLX5_SET(set_action_in, &cmd[2], action_type, MLX5_MODIFICATION_TYPE_SET);
+ MLX5_SET(set_action_in, &cmd[2], length, 8);
+ MLX5_SET(set_action_in, &cmd[2], offset, 24);
+ MLX5_SET(set_action_in, &cmd[2], field, mod_id);
+
+ pattern.data = cmd;
+ pattern.sz = sizeof(cmd);
+
+ return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+ 0, action->flags);
+}
+
+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
+{
+ enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = {
+ MLX5_MODI_OUT_DIPV6_127_96,
+ MLX5_MODI_OUT_DIPV6_95_64,
+ MLX5_MODI_OUT_DIPV6_63_32,
+ MLX5_MODI_OUT_DIPV6_31_0
+ };
+ struct mlx5dr_action_mh_pattern pattern;
+ __be64 cmd[5] = {0};
+ uint16_t mod_id;
+ uint32_t i;
+
+ /* Copy ipv6_route_ext[first_segment].dst_addr by flex parser to ipv6.dst_addr */
+ for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) {
+ mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, i + 1);
+ if (!mod_id) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ MLX5_SET(copy_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_COPY);
+ MLX5_SET(copy_action_in, &cmd[i], dst_field, field[i]);
+ MLX5_SET(copy_action_in, &cmd[i], src_field, mod_id);
+ }
+
+ mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+ if (!mod_id) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ /* Restore next_hdr from seg_left for flex parser identifying */
+ MLX5_SET(copy_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_COPY);
+ MLX5_SET(copy_action_in, &cmd[4], length, 8);
+ MLX5_SET(copy_action_in, &cmd[4], dst_offset, 24);
+ MLX5_SET(copy_action_in, &cmd[4], src_field, mod_id);
+ MLX5_SET(copy_action_in, &cmd[4], dst_field, mod_id);
+
+ pattern.data = cmd;
+ pattern.sz = sizeof(cmd);
+
+ return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+ 0, action->flags);
+}
+
+static void *
+mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action)
+{
+ uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0};
+ struct mlx5dr_action_mh_pattern pattern;
+ uint16_t mod_id;
+
+ mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+ if (!mod_id) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ /* Copy ipv6_route_ext.next_hdr to ipv6.protocol */
+ MLX5_SET(copy_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_COPY);
+ MLX5_SET(copy_action_in, cmd, length, 8);
+ MLX5_SET(copy_action_in, cmd, src_offset, 24);
+ MLX5_SET(copy_action_in, cmd, src_field, mod_id);
+ MLX5_SET(copy_action_in, cmd, dst_field, MLX5_MODI_OUT_IPV6_NEXT_HDR);
+
+ pattern.data = (__be64 *)cmd;
+ pattern.sz = sizeof(cmd);
+
+ return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+ 0, action->flags);
+}
+
+static int
+mlx5dr_action_create_pop_ipv6_route_ext(struct mlx5dr_action *action)
+{
+ uint8_t anchor_id = flow_hw_get_ipv6_route_ext_anchor_from_ctx(action->ctx);
+ struct mlx5dr_action_remove_header_attr hdr_attr;
+ uint32_t i;
+
+ if (!anchor_id) {
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
+
+ action->ipv6_route_ext.action[0] =
+ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(action);
+ action->ipv6_route_ext.action[1] =
+ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(action);
+ action->ipv6_route_ext.action[2] =
+ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(action);
+
+ hdr_attr.by_anchor.decap = 1;
+ hdr_attr.by_anchor.start_anchor = anchor_id;
+ hdr_attr.by_anchor.end_anchor = MLX5_HEADER_ANCHOR_TCP_UDP;
+ hdr_attr.type = MLX5DR_ACTION_REMOVE_HEADER_TYPE_BY_HEADER;
+ action->ipv6_route_ext.action[3] =
+ mlx5dr_action_create_remove_header(action->ctx, &hdr_attr, action->flags);
+
+ if (!action->ipv6_route_ext.action[0] || !action->ipv6_route_ext.action[1] ||
+ !action->ipv6_route_ext.action[2] || !action->ipv6_route_ext.action[3]) {
+ DR_LOG(ERR, "Failed to create ipv6_route_ext pop subaction");
+ goto err;
+ }
+
+ return 0;
+
+err:
+ for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++)
+ if (action->ipv6_route_ext.action[i])
+ mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
+
+ return rte_errno;
+}
+
+static void *
+mlx5dr_action_create_push_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
+{
+ uint8_t cmd[MLX5DR_MODIFY_ACTION_SIZE] = {0};
+ struct mlx5dr_action_mh_pattern pattern;
+
+ /* Set ipv6.protocol to IPPROTO_ROUTING */
+ MLX5_SET(set_action_in, cmd, action_type, MLX5_MODIFICATION_TYPE_SET);
+ MLX5_SET(set_action_in, cmd, length, 8);
+ MLX5_SET(set_action_in, cmd, field, MLX5_MODI_OUT_IPV6_NEXT_HDR);
+ MLX5_SET(set_action_in, cmd, data, IPPROTO_ROUTING);
+
+ pattern.data = (__be64 *)cmd;
+ pattern.sz = sizeof(cmd);
+
+ return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern, 0,
+ action->flags | MLX5DR_ACTION_FLAG_SHARED);
+}
+
+static void *
+mlx5dr_action_create_push_ipv6_route_ext_mhdr2(struct mlx5dr_action *action,
+ uint32_t bulk_size,
+ uint8_t *data)
+{
+ enum mlx5_modification_field field[MLX5_ST_SZ_DW(definer_hl_ipv6_addr)] = {
+ MLX5_MODI_OUT_DIPV6_127_96,
+ MLX5_MODI_OUT_DIPV6_95_64,
+ MLX5_MODI_OUT_DIPV6_63_32,
+ MLX5_MODI_OUT_DIPV6_31_0
+ };
+ struct mlx5dr_action_mh_pattern pattern;
+ uint32_t *ipv6_dst_addr = NULL;
+ uint8_t seg_left, next_hdr;
+ __be64 cmd[5] = {0};
+ uint16_t mod_id;
+ uint32_t i;
+
+ /* Fetch the last IPv6 address in the segment list */
+ if (action->flags & MLX5DR_ACTION_FLAG_SHARED) {
+ seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1;
+ ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext) +
+ seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr);
+ }
+
+ /* Copy IPv6 destination address from ipv6_route_ext.last_segment */
+ for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) {
+ MLX5_SET(set_action_in, &cmd[i], action_type, MLX5_MODIFICATION_TYPE_SET);
+ MLX5_SET(set_action_in, &cmd[i], field, field[i]);
+ if (action->flags & MLX5DR_ACTION_FLAG_SHARED)
+ MLX5_SET(set_action_in, &cmd[i], data, be32toh(*ipv6_dst_addr++));
+ }
+
+ mod_id = flow_hw_get_ipv6_route_ext_mod_id_from_ctx(action->ctx, 0);
+ if (!mod_id) {
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ /* Set ipv6_route_ext.next_hdr since initially pushed as 0 for right checksum */
+ MLX5_SET(set_action_in, &cmd[4], action_type, MLX5_MODIFICATION_TYPE_SET);
+ MLX5_SET(set_action_in, &cmd[4], length, 8);
+ MLX5_SET(set_action_in, &cmd[4], offset, 24);
+ MLX5_SET(set_action_in, &cmd[4], field, mod_id);
+ if (action->flags & MLX5DR_ACTION_FLAG_SHARED) {
+ next_hdr = MLX5_GET(header_ipv6_routing_ext, data, next_hdr);
+ MLX5_SET(set_action_in, &cmd[4], data, next_hdr);
+ }
+
+ pattern.data = cmd;
+ pattern.sz = sizeof(cmd);
+
+ return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
+ bulk_size, action->flags);
+}
+
+static int
+mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action,
+ struct mlx5dr_action_reformat_header *hdr,
+ uint32_t bulk_size)
+{
+ struct mlx5dr_action_insert_header insert_hdr = { {0} };
+ uint8_t header[MLX5_PUSH_MAX_LEN];
+ uint32_t i;
+
+ if (!hdr || !hdr->sz || hdr->sz > MLX5_PUSH_MAX_LEN ||
+ ((action->flags & MLX5DR_ACTION_FLAG_SHARED) && !hdr->data)) {
+ DR_LOG(ERR, "Invalid ipv6_route_ext header");
+ rte_errno = EINVAL;
+ return rte_errno;
+ }
+
+ if (action->flags & MLX5DR_ACTION_FLAG_SHARED) {
+ memcpy(header, hdr->data, hdr->sz);
+ /* Clear ipv6_route_ext.next_hdr for right checksum */
+ MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0);
+ }
+
+ insert_hdr.anchor = MLX5_HEADER_ANCHOR_TCP_UDP;
+ insert_hdr.encap = 1;
+ insert_hdr.hdr.sz = hdr->sz;
+ insert_hdr.hdr.data = header;
+ action->ipv6_route_ext.action[0] =
+ mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr,
+ bulk_size, action->flags);
+ action->ipv6_route_ext.action[1] =
+ mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action);
+ action->ipv6_route_ext.action[2] =
+ mlx5dr_action_create_push_ipv6_route_ext_mhdr2(action, bulk_size, hdr->data);
+
+ if (!action->ipv6_route_ext.action[0] ||
+ !action->ipv6_route_ext.action[1] ||
+ !action->ipv6_route_ext.action[2]) {
+ DR_LOG(ERR, "Failed to create ipv6_route_ext push subaction");
+ goto err;
+ }
+
+ return 0;
+
+err:
+ for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++)
+ if (action->ipv6_route_ext.action[i])
+ mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
+
+ return rte_errno;
+}
+
+struct mlx5dr_action *
+mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
+ enum mlx5dr_action_type action_type,
+ struct mlx5dr_action_reformat_header *hdr,
+ uint32_t log_bulk_size,
+ uint32_t flags)
+{
+ struct mlx5dr_action *action;
+ int ret;
+
+ if (mlx5dr_context_cap_dynamic_reparse(ctx)) {
+ DR_LOG(ERR, "IPv6 extension actions is not supported");
+ rte_errno = ENOTSUP;
+ return NULL;
+ }
+
+ if (!mlx5dr_action_is_hws_flags(flags) ||
+ ((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) {
+ DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags);
+ rte_errno = EINVAL;
+ return NULL;
+ }
+
+ action = mlx5dr_action_create_generic(ctx, flags, action_type);
+ if (!action) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ switch (action_type) {
+ case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+ if (!(flags & MLX5DR_ACTION_FLAG_SHARED)) {
+ DR_LOG(ERR, "Pop ipv6_route_ext must be shared");
+ rte_errno = EINVAL;
+ goto free_action;
+ }
+
+ ret = mlx5dr_action_create_pop_ipv6_route_ext(action);
+ break;
+ case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size);
+ break;
+ default:
+ DR_LOG(ERR, "Unsupported action type %d\n", action_type);
+ rte_errno = ENOTSUP;
+ goto free_action;
+ }
+
+ if (ret) {
+ DR_LOG(ERR, "Failed to create IPv6 extension reformat action");
+ goto free_action;
+ }
+
+ return action;
+
+free_action:
+ simple_free(action);
+ return NULL;
+}
+
static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
{
struct mlx5dr_devx_obj *obj = NULL;
@@ -2455,6 +2801,12 @@ static void mlx5dr_action_destroy_hws(struct mlx5dr_action *action)
mlx5dr_action_destroy_stcs(&action[i]);
mlx5dr_cmd_destroy_obj(action->reformat.arg_obj);
break;
+ case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+ for (i = 0; i < MLX5DR_ACTION_IPV6_EXT_MAX_SA; i++)
+ if (action->ipv6_route_ext.action[i])
+ mlx5dr_action_destroy(action->ipv6_route_ext.action[i]);
+ break;
}
}
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index e56f5b59c7..d0152dde3b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -8,6 +8,9 @@
/* Max number of STEs needed for a rule (including match) */
#define MLX5DR_ACTION_MAX_STE 10
+/* Max number of internal subactions of ipv6_ext */
+#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
+
enum mlx5dr_action_stc_idx {
MLX5DR_ACTION_STC_IDX_CTRL = 0,
MLX5DR_ACTION_STC_IDX_HIT = 1,
@@ -143,6 +146,10 @@ struct mlx5dr_action {
uint8_t offset;
bool encap;
} reformat;
+ struct {
+ struct mlx5dr_action
+ *action[MLX5DR_ACTION_IPV6_EXT_MAX_SA];
+ } ipv6_route_ext;
struct {
struct mlx5dr_devx_obj *devx_obj;
uint8_t return_reg_id;
diff --git a/drivers/net/mlx5/hws/mlx5dr_debug.c b/drivers/net/mlx5/hws/mlx5dr_debug.c
index 5111f41648..1e5ef9cf67 100644
--- a/drivers/net/mlx5/hws/mlx5dr_debug.c
+++ b/drivers/net/mlx5/hws/mlx5dr_debug.c
@@ -31,6 +31,8 @@ const char *mlx5dr_debug_action_type_str[] = {
[MLX5DR_ACTION_TYP_CRYPTO_DECRYPT] = "CRYPTO_DECRYPT",
[MLX5DR_ACTION_TYP_INSERT_HEADER] = "INSERT_HEADER",
[MLX5DR_ACTION_TYP_REMOVE_HEADER] = "REMOVE_HEADER",
+ [MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT] = "POP_IPV6_ROUTE_EXT",
+ [MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT] = "PUSH_IPV6_ROUTE_EXT",
};
static_assert(ARRAY_SIZE(mlx5dr_debug_action_type_str) == MLX5DR_ACTION_TYP_MAX,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ddb3b7b6fd..8174c03d50 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -589,6 +589,7 @@ struct mlx5_flow_dv_matcher {
struct mlx5_flow_dv_match_params mask; /**< Matcher mask. */
};
+#define MLX5_PUSH_MAX_LEN 128
#define MLX5_ENCAP_MAX_LEN 132
/* Encap/decap resource structure. */
@@ -2872,6 +2873,49 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
#endif
return UINT32_MAX;
}
+
+static __rte_always_inline uint8_t
+flow_hw_get_ipv6_route_ext_anchor_from_ctx(void *dr_ctx)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+ struct mlx5_priv *priv;
+
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ priv = rte_eth_devices[port].data->dev_private;
+ if (priv->dr_ctx == dr_ctx)
+ return priv->sh->srh_flex_parser.flex.devx_fp->anchor_id;
+ }
+#else
+ RTE_SET_USED(dr_ctx);
+#endif
+ return 0;
+}
+
+static __rte_always_inline uint16_t
+flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
+{
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+ uint16_t port;
+ struct mlx5_priv *priv;
+ struct mlx5_flex_parser_devx *fp;
+
+ if (idx >= MLX5_GRAPH_NODE_SAMPLE_NUM || idx >= MLX5_SRV6_SAMPLE_NUM)
+ return 0;
+ MLX5_ETH_FOREACH_DEV(port, NULL) {
+ priv = rte_eth_devices[port].data->dev_private;
+ if (priv->dr_ctx == dr_ctx) {
+ fp = priv->sh->srh_flex_parser.flex.devx_fp;
+ return fp->sample_info[idx].modify_field_id;
+ }
+ }
+#else
+ RTE_SET_USED(dr_ctx);
+ RTE_SET_USED(idx);
+#endif
+ return 0;
+}
+
void
mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
void
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 28/30] net/mlx5/hws: add setter for IPv6 routing push remove
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (25 preceding siblings ...)
2023-10-29 16:31 ` [PATCH 27/30] net/mlx5/hws: add IPv6 routing extension push remove actions Gregory Etelson
@ 2023-10-29 16:32 ` Gregory Etelson
2023-10-29 16:32 ` [PATCH 29/30] net/mlx5: implement " Gregory Etelson
` (2 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:32 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, Alex Vesker, Ori Kam,
Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
From: Rongwei Liu <rongweil@nvidia.com>
The rte action is translated to multiple dr_actions, which need
different setters to program them.
In order to leverage the existing setter logic, a new callback,
fetch_opt, is introduced with a unique parameter.
Each setter may have different reparse properties.
A setter which requires no reparse cannot share a slot with one
that has reparse enabled, even if there is spare space.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_action.c | 174 +++++++++++++++++++++++++++
drivers/net/mlx5/hws/mlx5dr_action.h | 3 +-
2 files changed, 176 insertions(+), 1 deletion(-)
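
A rough standalone sketch of the fan-out described above (local types only,
not driver code): one rule action index expands into several setters, with the
new extra_data field selecting the internal sub-action each setter programs:

#include <stdint.h>

struct example_setter {
	uint8_t idx_double;  /* index of the user-level rule action */
	uint8_t extra_data;  /* which internal sub-action this setter programs */
};

/* Expand one push/pop rule action into its three modify-header setters.
 * Setters that must force reparse are kept on separate slots from setters
 * that must skip it, even when a slot has spare space. */
static void
example_expand(struct example_setter *setters, uint8_t rule_action_idx)
{
	uint8_t sub;

	for (sub = 0; sub < 3; sub++) {
		setters[sub].idx_double = rule_action_idx;
		setters[sub].extra_data = sub;
	}
}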
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 6ac3c2f782..281b09a582 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -3311,6 +3311,121 @@ mlx5dr_action_setter_reformat_trailer(struct mlx5dr_actions_apply_data *apply,
apply->wqe_data[MLX5DR_ACTION_OFFSET_DW7] = 0;
}
+static void
+mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(uint8_t *data, void *mh_data)
+{
+ uint8_t *action_ptr = mh_data;
+ uint32_t *ipv6_dst_addr;
+ uint8_t seg_left;
+ uint32_t i;
+
+ /* Fetch the last IPv6 address in the segment list which is the next hop */
+ seg_left = MLX5_GET(header_ipv6_routing_ext, data, segments_left) - 1;
+ ipv6_dst_addr = (uint32_t *)data + MLX5_ST_SZ_DW(header_ipv6_routing_ext)
+ + seg_left * MLX5_ST_SZ_DW(definer_hl_ipv6_addr);
+
+ /* Load next hop IPv6 address in reverse order to ipv6.dst_address */
+ for (i = 0; i < MLX5_ST_SZ_DW(definer_hl_ipv6_addr); i++) {
+ MLX5_SET(set_action_in, action_ptr, data, be32toh(*ipv6_dst_addr++));
+ action_ptr += MLX5DR_MODIFY_ACTION_SIZE;
+ }
+
+ /* Set ipv6_route_ext.next_hdr per user input */
+ MLX5_SET(set_action_in, action_ptr, data, *data);
+}
+
+static void
+mlx5dr_action_setter_ipv6_route_ext_mhdr(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ struct mlx5dr_rule_action *rule_action = apply->rule_action;
+ struct mlx5dr_actions_wqe_setter tmp_setter = {0};
+ struct mlx5dr_rule_action tmp_rule_action;
+ __be64 cmd[MLX5_SRV6_SAMPLE_NUM] = {0};
+ struct mlx5dr_action *ipv6_ext_action;
+ uint8_t *header;
+
+ header = rule_action[setter->idx_double].ipv6_ext.header;
+ ipv6_ext_action = rule_action[setter->idx_double].action;
+ tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data];
+
+ if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) {
+ tmp_rule_action.modify_header.offset = 0;
+ tmp_rule_action.modify_header.pattern_idx = 0;
+ tmp_rule_action.modify_header.data = NULL;
+ } else {
+ /*
+ * Copy ipv6_dst from ipv6_route_ext.last_seg.
+ * Set ipv6_route_ext.next_hdr.
+ */
+ mlx5dr_action_setter_ipv6_route_ext_gen_push_mhdr(header, cmd);
+ tmp_rule_action.modify_header.data = (uint8_t *)cmd;
+ tmp_rule_action.modify_header.pattern_idx = 0;
+ tmp_rule_action.modify_header.offset =
+ rule_action[setter->idx_double].ipv6_ext.offset;
+ }
+
+ apply->rule_action = &tmp_rule_action;
+
+ /* Reuse regular */
+ mlx5dr_action_setter_modify_header(apply, &tmp_setter);
+
+ /* Swap rule actions from backup */
+ apply->rule_action = rule_action;
+}
+
+static void
+mlx5dr_action_setter_ipv6_route_ext_insert_ptr(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ struct mlx5dr_rule_action *rule_action = apply->rule_action;
+ struct mlx5dr_actions_wqe_setter tmp_setter = {0};
+ struct mlx5dr_rule_action tmp_rule_action;
+ struct mlx5dr_action *ipv6_ext_action;
+ uint8_t header[MLX5_PUSH_MAX_LEN];
+
+ ipv6_ext_action = rule_action[setter->idx_double].action;
+ tmp_rule_action.action = ipv6_ext_action->ipv6_route_ext.action[setter->extra_data];
+
+ if (tmp_rule_action.action->flags & MLX5DR_ACTION_FLAG_SHARED) {
+ tmp_rule_action.reformat.offset = 0;
+ tmp_rule_action.reformat.hdr_idx = 0;
+ tmp_rule_action.reformat.data = NULL;
+ } else {
+ memcpy(header, rule_action[setter->idx_double].ipv6_ext.header,
+ tmp_rule_action.action->reformat.header_size);
+ /* Clear ipv6_route_ext.next_hdr for right checksum */
+ MLX5_SET(header_ipv6_routing_ext, header, next_hdr, 0);
+ tmp_rule_action.reformat.data = header;
+ tmp_rule_action.reformat.hdr_idx = 0;
+ tmp_rule_action.reformat.offset =
+ rule_action[setter->idx_double].ipv6_ext.offset;
+ }
+
+ apply->rule_action = &tmp_rule_action;
+
+ /* Reuse regular */
+ mlx5dr_action_setter_insert_ptr(apply, &tmp_setter);
+
+ /* Swap rule actions from backup */
+ apply->rule_action = rule_action;
+}
+
+static void
+mlx5dr_action_setter_ipv6_route_ext_pop(struct mlx5dr_actions_apply_data *apply,
+ struct mlx5dr_actions_wqe_setter *setter)
+{
+ struct mlx5dr_rule_action *rule_action = &apply->rule_action[setter->idx_single];
+ uint8_t idx = MLX5DR_ACTION_IPV6_EXT_MAX_SA - 1;
+ struct mlx5dr_action *action;
+
+ /* Pop the ipv6_route_ext as set_single logic */
+ action = rule_action->action->ipv6_route_ext.action[idx];
+ apply->wqe_data[MLX5DR_ACTION_OFFSET_DW5] = 0;
+ apply->wqe_ctrl->stc_ix[MLX5DR_ACTION_STC_IDX_DW5] =
+ htobe32(action->stc[apply->tbl_type].offset);
+}
+
int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
{
struct mlx5dr_actions_wqe_setter *start_setter = at->setters + 1;
@@ -3374,6 +3489,65 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
setter->idx_double = i;
break;
+ case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+ /*
+ * Backup ipv6_route_ext.next_hdr to ipv6_route_ext.seg_left.
+ * Set ipv6_route_ext.next_hdr to 0 for checksum bug.
+ */
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr;
+ setter->idx_double = i;
+ setter->extra_data = 0;
+ setter++;
+
+ /*
+ * Restore ipv6_route_ext.next_hdr from ipv6_route_ext.seg_left.
+ * Load the final destination address from flex parser sample 1->4.
+ */
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr;
+ setter->idx_double = i;
+ setter->extra_data = 1;
+ setter++;
+
+ /* Set the ipv6.protocol per ipv6_route_ext.next_hdr */
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr;
+ setter->idx_double = i;
+ setter->extra_data = 2;
+ /* Pop ipv6_route_ext */
+ setter->flags |= ASF_SINGLE1 | ASF_REMOVE;
+ setter->set_single = &mlx5dr_action_setter_ipv6_route_ext_pop;
+ setter->idx_single = i;
+ break;
+
+ case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ /* Insert ipv6_route_ext with next_hdr as 0 due to checksum bug */
+ setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
+ setter->flags |= ASF_DOUBLE | ASF_INSERT;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_insert_ptr;
+ setter->idx_double = i;
+ setter->extra_data = 0;
+ setter++;
+
+ /* Set ipv6.protocol as IPPROTO_ROUTING: 0x2b */
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr;
+ setter->idx_double = i;
+ setter->extra_data = 1;
+ setter++;
+
+ /*
+ * Load the right ipv6_route_ext.next_hdr per user input buffer.
+ * Load the next dest_addr from the ipv6_route_ext.seg_list[last].
+ */
+ setter->flags |= ASF_DOUBLE | ASF_MODIFY;
+ setter->set_double = &mlx5dr_action_setter_ipv6_route_ext_mhdr;
+ setter->idx_double = i;
+ setter->extra_data = 2;
+ break;
+
case MLX5DR_ACTION_TYP_MODIFY_HDR:
/* Double modify header list */
setter = mlx5dr_action_setter_find_first(last_setter, ASF_DOUBLE | ASF_REMOVE);
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index d0152dde3b..ce9091a336 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -6,7 +6,7 @@
#define MLX5DR_ACTION_H_
/* Max number of STEs needed for a rule (including match) */
-#define MLX5DR_ACTION_MAX_STE 10
+#define MLX5DR_ACTION_MAX_STE 20
/* Max number of internal subactions of ipv6_ext */
#define MLX5DR_ACTION_IPV6_EXT_MAX_SA 4
@@ -109,6 +109,7 @@ struct mlx5dr_actions_wqe_setter {
uint8_t idx_ctr;
uint8_t idx_hit;
uint8_t flags;
+ uint8_t extra_data;
};
struct mlx5dr_action_template {
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 29/30] net/mlx5: implement IPv6 routing push remove
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (26 preceding siblings ...)
2023-10-29 16:32 ` [PATCH 28/30] net/mlx5/hws: add setter for IPv6 routing push remove Gregory Etelson
@ 2023-10-29 16:32 ` Gregory Etelson
2023-10-29 16:32 ` [PATCH 30/30] net/mlx5/hws: add stc reparse support for srv6 push pop Gregory Etelson
2023-11-05 18:49 ` [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Thomas Monjalon
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:32 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, Ori Kam, Suanming Mou,
Matan Azrad, Viacheslav Ovsiienko
From: Rongwei Liu <rongweil@nvidia.com>
Reserve a push data buffer for each job; the maximum
length is set to 128 bytes for now.
Only type IPPROTO_ROUTING is supported when translating the rte
flow action.
Remove actions must be shared globally and only support TCP or UDP
as the next layer.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
doc/guides/nics/features/mlx5.ini | 2 +
doc/guides/nics/mlx5.rst | 11 +-
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.h | 21 ++-
drivers/net/mlx5/mlx5_flow_hw.c | 282 +++++++++++++++++++++++++++++-
5 files changed, 307 insertions(+), 10 deletions(-)
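
For reference, a sketch of how an application might fill the corresponding
rte_flow actions; the structure layouts follow the
rte_flow_action_ipv6_ext_push/remove usage in this patch, while the example_*
helper and the static storage are illustrative only:

#include <netinet/in.h>
#include <rte_flow.h>

static void
example_fill_srv6_actions(struct rte_flow_action *push_act,
			  struct rte_flow_action *remove_act,
			  uint8_t *srh, size_t srh_len)
{
	static struct rte_flow_action_ipv6_ext_push push_conf;
	static struct rte_flow_action_ipv6_ext_remove remove_conf;

	push_conf.type = IPPROTO_ROUTING; /* only the routing header is accepted */
	push_conf.data = srh;
	push_conf.size = srh_len;         /* must not exceed 128 bytes */

	remove_conf.type = IPPROTO_ROUTING;

	push_act->type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH;
	push_act->conf = &push_conf;

	remove_act->type = RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE;
	remove_act->conf = &remove_conf;
}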
diff --git a/doc/guides/nics/features/mlx5.ini b/doc/guides/nics/features/mlx5.ini
index a85d755734..9c943fe5da 100644
--- a/doc/guides/nics/features/mlx5.ini
+++ b/doc/guides/nics/features/mlx5.ini
@@ -107,6 +107,8 @@ flag = Y
inc_tcp_ack = Y
inc_tcp_seq = Y
indirect_list = Y
+ipv6_ext_push = Y
+ipv6_ext_remove = Y
jump = Y
mark = Y
meter = Y
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 81cc193f34..7e0a3d4cb8 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -148,7 +148,9 @@ Features
- Matching on GTP extension header with raw encap/decap action.
- Matching on Geneve TLV option header with raw encap/decap action.
- Matching on ESP header SPI field.
+- Matching on flex item with specific pattern.
- Matching on InfiniBand BTH.
+- Modify flex item field.
- Modify IPv4/IPv6 ECN field.
- RSS support in sample action.
- E-Switch mirroring and jump.
@@ -166,7 +168,7 @@ Features
- Sub-Function.
- Matching on represented port.
- Matching on aggregated affinity.
-
+- Push or remove IPv6 routing extension.
Limitations
-----------
@@ -728,6 +730,13 @@ Limitations
The flow engine of a process cannot move from active to standby mode
if preceding active application rules are still present and vice versa.
+- IPv6 routing extension push or remove:
+
+ - Supported only with HW Steering enabled (``dv_flow_en`` = 2).
+ - Supported in non-zero group (No limits on transfer domain if `fdb_def_rule_en` = 1 which is default).
+ - Only supports TCP or UDP as next layer.
+ - IPv6 routing header must be the only present extension.
+ - Not supported on guest port.
Statistics
----------
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index ad82d8060e..c60886abff 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -373,6 +373,7 @@ struct mlx5_hw_q_job {
};
void *user_data; /* Job user data. */
uint8_t *encap_data; /* Encap data. */
+ uint8_t *push_data; /* IPv6 routing push data. */
struct mlx5_modification_cmd *mhdr_cmd;
struct rte_flow_item *items;
union {
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 8174c03d50..6f4979a575 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -358,6 +358,8 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_INDIRECT_COUNT (1ull << 43)
#define MLX5_FLOW_ACTION_INDIRECT_AGE (1ull << 44)
#define MLX5_FLOW_ACTION_QUOTA (1ull << 46)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE (1ull << 47)
+#define MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH (1ull << 48)
#define MLX5_FLOW_DROP_INCLUSIVE_ACTIONS \
(MLX5_FLOW_ACTION_COUNT | MLX5_FLOW_ACTION_SAMPLE | MLX5_FLOW_ACTION_AGE)
@@ -1263,6 +1265,8 @@ typedef int
const struct rte_flow_action *,
struct mlx5dr_rule_action *);
+#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+
/* rte flow action translate to DR action struct. */
struct mlx5_action_construct_data {
LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1309,6 +1313,10 @@ struct mlx5_action_construct_data {
struct {
cnt_id_t id;
} shared_counter;
+ struct {
+ /* IPv6 extension push data len. */
+ uint16_t len;
+ } ipv6_ext;
struct {
uint32_t id;
uint32_t conf_masked:1;
@@ -1353,6 +1361,7 @@ struct rte_flow_actions_template {
uint16_t *src_off; /* RTE action displacement from app. template */
uint16_t reformat_off; /* Offset of DR reformat action. */
uint16_t mhdr_off; /* Offset of DR modify header action. */
+ uint16_t recom_off; /* Offset of DR IPv6 routing push remove action. */
uint32_t refcnt; /* Reference counter. */
uint8_t flex_item; /* flex item index. */
};
@@ -1376,7 +1385,14 @@ struct mlx5_hw_encap_decap_action {
uint8_t data[]; /* Action data. */
};
-#define MLX5_MHDR_MAX_CMD ((MLX5_MAX_MODIFY_NUM) * 2 + 1)
+/* Push remove action struct. */
+struct mlx5_hw_push_remove_action {
+ struct mlx5dr_action *action; /* Action object. */
+ /* Is push_remove action shared across flows in table. */
+ uint8_t shared;
+ size_t data_size; /* Action metadata size. */
+ uint8_t data[]; /* Action data. */
+};
/* Modify field action struct. */
struct mlx5_hw_modify_header_action {
@@ -1407,6 +1423,9 @@ struct mlx5_hw_actions {
/* Encap/Decap action. */
struct mlx5_hw_encap_decap_action *encap_decap;
uint16_t encap_decap_pos; /* Encap/Decap action position. */
+ /* Push/remove action. */
+ struct mlx5_hw_push_remove_action *push_remove;
+ uint16_t push_remove_pos; /* Push/remove action position. */
uint32_t mark:1; /* Indicate the mark action. */
cnt_id_t cnt_id; /* Counter id. */
uint32_t mtr_id; /* Meter id. */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 84c78ba19c..f19b548a25 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -624,6 +624,12 @@ __flow_hw_action_template_destroy(struct rte_eth_dev *dev,
mlx5_free(acts->encap_decap);
acts->encap_decap = NULL;
}
+ if (acts->push_remove) {
+ if (acts->push_remove->action)
+ mlx5dr_action_destroy(acts->push_remove->action);
+ mlx5_free(acts->push_remove);
+ acts->push_remove = NULL;
+ }
if (acts->mhdr) {
flow_hw_template_destroy_mhdr_action(acts->mhdr);
mlx5_free(acts->mhdr);
@@ -761,6 +767,44 @@ __flow_hw_act_data_encap_append(struct mlx5_priv *priv,
return 0;
}
+/**
+ * Append dynamic push action to the dynamic action list.
+ *
+ * @param[in] dev
+ * Pointer to the port.
+ * @param[in] acts
+ * Pointer to the template HW steering DR actions.
+ * @param[in] type
+ * Action type.
+ * @param[in] action_src
+ * Offset of source rte flow action.
+ * @param[in] action_dst
+ * Offset of destination DR action.
+ * @param[in] len
+ * Length of the data to be updated.
+ *
+ * @return
+ * Data pointer on success, NULL otherwise and rte_errno is set.
+ */
+static __rte_always_inline void *
+__flow_hw_act_data_push_append(struct rte_eth_dev *dev,
+ struct mlx5_hw_actions *acts,
+ enum rte_flow_action_type type,
+ uint16_t action_src,
+ uint16_t action_dst,
+ uint16_t len)
+{
+ struct mlx5_action_construct_data *act_data;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+ if (!act_data)
+ return NULL;
+ act_data->ipv6_ext.len = len;
+ LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+ return act_data;
+}
+
static __rte_always_inline int
__flow_hw_act_data_hdr_modify_append(struct mlx5_priv *priv,
struct mlx5_hw_actions *acts,
@@ -1874,6 +1918,82 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
return 0;
}
+
+static int
+mlx5_create_ipv6_ext_reformat(struct rte_eth_dev *dev,
+ const struct mlx5_flow_template_table_cfg *cfg,
+ struct mlx5_hw_actions *acts,
+ struct rte_flow_actions_template *at,
+ uint8_t *push_data, uint8_t *push_data_m,
+ size_t push_size, uint16_t recom_src,
+ enum mlx5dr_action_type recom_type)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+ const struct rte_flow_attr *attr = &table_attr->flow_attr;
+ enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
+ struct mlx5_action_construct_data *act_data;
+ struct mlx5dr_action_reformat_header hdr = {0};
+ uint32_t flag, bulk = 0;
+
+ flag = mlx5_hw_act_flag[!!attr->group][type];
+ acts->push_remove = mlx5_malloc(MLX5_MEM_ZERO,
+ sizeof(*acts->push_remove) + push_size,
+ 0, SOCKET_ID_ANY);
+ if (!acts->push_remove)
+ return -ENOMEM;
+
+ switch (recom_type) {
+ case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ if (!push_data || !push_size)
+ goto err1;
+ if (!push_data_m) {
+ bulk = rte_log2_u32(table_attr->nb_flows);
+ } else {
+ flag |= MLX5DR_ACTION_FLAG_SHARED;
+ acts->push_remove->shared = 1;
+ }
+ acts->push_remove->data_size = push_size;
+ memcpy(acts->push_remove->data, push_data, push_size);
+ hdr.data = push_data;
+ hdr.sz = push_size;
+ break;
+ case MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT:
+ flag |= MLX5DR_ACTION_FLAG_SHARED;
+ acts->push_remove->shared = 1;
+ break;
+ default:
+ break;
+ }
+
+ acts->push_remove->action =
+ mlx5dr_action_create_reformat_ipv6_ext(priv->dr_ctx,
+ recom_type, &hdr, bulk, flag);
+ if (!acts->push_remove->action)
+ goto err1;
+ acts->rule_acts[at->recom_off].action = acts->push_remove->action;
+ acts->rule_acts[at->recom_off].ipv6_ext.header = acts->push_remove->data;
+ acts->rule_acts[at->recom_off].ipv6_ext.offset = 0;
+ acts->push_remove_pos = at->recom_off;
+ if (!acts->push_remove->shared) {
+ act_data = __flow_hw_act_data_push_append(dev, acts,
+ RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH,
+ recom_src, at->recom_off, push_size);
+ if (!act_data)
+ goto err;
+ }
+ return 0;
+err:
+ if (acts->push_remove->action)
+ mlx5dr_action_destroy(acts->push_remove->action);
+err1:
+ if (acts->push_remove) {
+ mlx5_free(acts->push_remove);
+ acts->push_remove = NULL;
+ }
+ return -EINVAL;
+}
+
/**
* Translate rte_flow actions to DR action.
*
@@ -1907,19 +2027,24 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
{
struct mlx5_priv *priv = dev->data->dev_private;
const struct rte_flow_template_table_attr *table_attr = &cfg->attr;
+ struct mlx5_hca_flex_attr *hca_attr = &priv->sh->cdev->config.hca_attr.flex;
const struct rte_flow_attr *attr = &table_attr->flow_attr;
struct rte_flow_action *actions = at->actions;
struct rte_flow_action *masks = at->masks;
enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
+ enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
const struct rte_flow_action_raw_encap *raw_encap_data;
+ const struct rte_flow_action_ipv6_ext_push *ipv6_ext_data;
const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
- uint16_t reformat_src = 0;
+ uint16_t reformat_src = 0, recom_src = 0;
uint8_t *encap_data = NULL, *encap_data_m = NULL;
- size_t data_size = 0;
+ uint8_t *push_data = NULL, *push_data_m = NULL;
+ size_t data_size = 0, push_size = 0;
struct mlx5_hw_modify_header_action mhdr = { 0 };
bool actions_end = false;
uint32_t type;
bool reformat_used = false;
+ bool recom_used = false;
unsigned int of_vlan_offset;
uint16_t jump_pos;
uint32_t ct_idx;
@@ -2118,6 +2243,36 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
reformat_used = true;
refmt_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+ !priv->sh->srh_flex_parser.flex.mapnum) {
+ DRV_LOG(ERR, "SRv6 anchor is not supported.");
+ goto err;
+ }
+ MLX5_ASSERT(!recom_used && !recom_type);
+ recom_used = true;
+ recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+ ipv6_ext_data =
+ (const struct rte_flow_action_ipv6_ext_push *)masks->conf;
+ if (ipv6_ext_data)
+ push_data_m = ipv6_ext_data->data;
+ ipv6_ext_data =
+ (const struct rte_flow_action_ipv6_ext_push *)actions->conf;
+ if (ipv6_ext_data) {
+ push_data = ipv6_ext_data->data;
+ push_size = ipv6_ext_data->size;
+ }
+ recom_src = src_pos;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ if (!hca_attr->query_match_sample_info || !hca_attr->parse_graph_anchor ||
+ !priv->sh->srh_flex_parser.flex.mapnum) {
+ DRV_LOG(ERR, "SRv6 anchor is not supported.");
+ goto err;
+ }
+ recom_used = true;
+ recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+ break;
case RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL:
flow_hw_translate_group(dev, cfg, attr->group,
&target_grp, error);
@@ -2265,6 +2420,14 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
if (ret)
goto err;
}
+ if (recom_used) {
+ MLX5_ASSERT(at->recom_off != UINT16_MAX);
+ ret = mlx5_create_ipv6_ext_reformat(dev, cfg, acts, at, push_data,
+ push_data_m, push_size, recom_src,
+ recom_type);
+ if (ret)
+ goto err;
+ }
return 0;
err:
err = rte_errno;
@@ -2662,11 +2825,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
const struct mlx5_hw_actions *hw_acts = &hw_at->acts;
const struct rte_flow_action *action;
const struct rte_flow_action_raw_encap *raw_encap_data;
+ const struct rte_flow_action_ipv6_ext_push *ipv6_push;
const struct rte_flow_item *enc_item = NULL;
const struct rte_flow_action_ethdev *port_action = NULL;
const struct rte_flow_action_meter *meter = NULL;
const struct rte_flow_action_age *age = NULL;
uint8_t *buf = job->encap_data;
+ uint8_t *push_buf = job->push_data;
struct rte_flow_attr attr = {
.ingress = 1,
};
@@ -2797,6 +2962,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
MLX5_ASSERT(raw_encap_data->size ==
act_data->encap.len);
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ ipv6_push =
+ (const struct rte_flow_action_ipv6_ext_push *)action->conf;
+ rte_memcpy((void *)push_buf, ipv6_push->data,
+ act_data->ipv6_ext.len);
+ MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
+ break;
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
ret = flow_hw_set_vlan_vid_construct(dev, job,
@@ -2953,6 +3125,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
job->flow->res_idx - 1;
rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
}
+ if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
+ rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
+ job->flow->res_idx - 1;
+ rule_acts[hw_acts->push_remove_pos].ipv6_ext.header = push_buf;
+ }
if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
job->flow->cnt_id = hw_acts->cnt_id;
return 0;
@@ -4731,6 +4908,38 @@ flow_hw_validate_action_indirect(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Validate ipv6_ext_push action.
+ *
+ * @param[in] dev
+ * Pointer to rte_eth_dev structure.
+ * @param[in] action
+ * Pointer to the indirect action.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_validate_action_ipv6_ext_push(struct rte_eth_dev *dev __rte_unused,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_ipv6_ext_push *raw_push_data = action->conf;
+
+ if (!raw_push_data || !raw_push_data->size || !raw_push_data->data)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "invalid ipv6_ext_push data");
+ if (raw_push_data->type != IPPROTO_ROUTING ||
+ raw_push_data->size > MLX5_PUSH_MAX_LEN)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, action,
+ "Unsupported ipv6_ext_push type or length");
+ return 0;
+}
+
/**
* Validate raw_encap action.
*
@@ -4957,6 +5166,7 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
#endif
uint16_t i;
int ret;
+ const struct rte_flow_action_ipv6_ext_remove *remove_data;
/* FDB actions are only valid to proxy port. */
if (attr->transfer && (!priv->sh->config.dv_esw_en || !priv->master))
@@ -5053,6 +5263,21 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
/* TODO: Validation logic */
action_flags |= MLX5_FLOW_ACTION_DECAP;
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ ret = flow_hw_validate_action_ipv6_ext_push(dev, action, error);
+ if (ret < 0)
+ return ret;
+ action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ remove_data = action->conf;
+ /* Remove action must be shared. */
+ if (remove_data->type != IPPROTO_ROUTING || !mask) {
+ DRV_LOG(ERR, "Only supports shared IPv6 routing remove");
+ return -EINVAL;
+ }
+ action_flags |= MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE;
+ break;
case RTE_FLOW_ACTION_TYPE_METER:
/* TODO: Validation logic */
action_flags |= MLX5_FLOW_ACTION_METER;
@@ -5160,6 +5385,8 @@ static enum mlx5dr_action_type mlx5_hw_dr_action_types[] = {
[RTE_FLOW_ACTION_TYPE_OF_POP_VLAN] = MLX5DR_ACTION_TYP_POP_VLAN,
[RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN] = MLX5DR_ACTION_TYP_PUSH_VLAN,
[RTE_FLOW_ACTION_TYPE_SEND_TO_KERNEL] = MLX5DR_ACTION_TYP_DEST_ROOT,
+ [RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH] = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT,
+ [RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE] = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT,
};
static inline void
@@ -5251,6 +5478,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
/**
* Create DR action template based on a provided sequence of flow actions.
*
+ * @param[in] dev
+ * Pointer to the rte_eth_dev structure.
* @param[in] at
* Pointer to flow actions template to be updated.
*
@@ -5259,7 +5488,8 @@ flow_hw_template_actions_list(struct rte_flow_actions_template *at,
* NULL otherwise.
*/
static struct mlx5dr_action_template *
-flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
+flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
+ struct rte_flow_actions_template *at)
{
struct mlx5dr_action_template *dr_template;
enum mlx5dr_action_type action_types[MLX5_HW_MAX_ACTS] = { MLX5DR_ACTION_TYP_LAST };
@@ -5268,8 +5498,11 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
enum mlx5dr_action_type reformat_act_type = MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2;
uint16_t reformat_off = UINT16_MAX;
uint16_t mhdr_off = UINT16_MAX;
+ uint16_t recom_off = UINT16_MAX;
uint16_t cnt_off = UINT16_MAX;
+ enum mlx5dr_action_type recom_type = MLX5DR_ACTION_TYP_LAST;
int ret;
+
for (i = 0, curr_off = 0; at->actions[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
const struct rte_flow_action_raw_encap *raw_encap_data;
size_t data_size;
@@ -5301,6 +5534,16 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
reformat_off = curr_off++;
reformat_act_type = mlx5_hw_dr_action_types[at->actions[i].type];
break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_PUSH:
+ MLX5_ASSERT(recom_off == UINT16_MAX);
+ recom_type = MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT;
+ recom_off = curr_off++;
+ break;
+ case RTE_FLOW_ACTION_TYPE_IPV6_EXT_REMOVE:
+ MLX5_ASSERT(recom_off == UINT16_MAX);
+ recom_type = MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT;
+ recom_off = curr_off++;
+ break;
case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
raw_encap_data = at->actions[i].conf;
data_size = raw_encap_data->size;
@@ -5373,11 +5616,25 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
at->reformat_off = reformat_off;
action_types[reformat_off] = reformat_act_type;
}
+ if (recom_off != UINT16_MAX) {
+ at->recom_off = recom_off;
+ action_types[recom_off] = recom_type;
+ }
dr_template = mlx5dr_action_template_create(action_types);
- if (dr_template)
+ if (dr_template) {
at->dr_actions_num = curr_off;
- else
+ } else {
DRV_LOG(ERR, "Failed to create DR action template: %d", rte_errno);
+ return NULL;
+ }
+ /* Create srh flex parser for remove anchor. */
+ if ((recom_type == MLX5DR_ACTION_TYP_POP_IPV6_ROUTE_EXT ||
+ recom_type == MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT) &&
+ mlx5_alloc_srh_flex_parser(dev)) {
+ DRV_LOG(ERR, "Failed to create srv6 flex parser");
+ claim_zero(mlx5dr_action_template_destroy(dr_template));
+ return NULL;
+ }
return dr_template;
err_actions_num:
DRV_LOG(ERR, "Number of HW actions (%u) exceeded maximum (%u) allowed in template",
@@ -5786,7 +6043,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
break;
}
}
- at->tmpl = flow_hw_dr_actions_template_create(at);
+ at->tmpl = flow_hw_dr_actions_template_create(dev, at);
if (!at->tmpl)
goto error;
at->action_flags = action_flags;
@@ -5823,6 +6080,9 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
struct rte_flow_actions_template *template,
struct rte_flow_error *error __rte_unused)
{
+ uint64_t flag = MLX5_FLOW_ACTION_IPV6_ROUTING_REMOVE |
+ MLX5_FLOW_ACTION_IPV6_ROUTING_PUSH;
+
if (__atomic_load_n(&template->refcnt, __ATOMIC_RELAXED) > 1) {
DRV_LOG(WARNING, "Action template %p is still in use.",
(void *)template);
@@ -5831,6 +6091,8 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev,
NULL,
"action template in using");
}
+ if (template->action_flags & flag)
+ mlx5_free_srh_flex_parser(dev);
LIST_REMOVE(template, next);
flow_hw_flex_item_release(dev, &template->flex_item);
if (template->tmpl)
@@ -8398,6 +8660,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
mem_size += (sizeof(struct mlx5_hw_q_job *) +
sizeof(struct mlx5_hw_q_job) +
sizeof(uint8_t) * MLX5_ENCAP_MAX_LEN +
+ sizeof(uint8_t) * MLX5_PUSH_MAX_LEN +
sizeof(struct mlx5_modification_cmd) *
MLX5_MHDR_MAX_CMD +
sizeof(struct rte_flow_item) *
@@ -8413,7 +8676,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
}
for (i = 0; i < nb_q_updated; i++) {
char mz_name[RTE_MEMZONE_NAMESIZE];
- uint8_t *encap = NULL;
+ uint8_t *encap = NULL, *push = NULL;
struct mlx5_modification_cmd *mhdr_cmd = NULL;
struct rte_flow_item *items = NULL;
struct rte_flow_hw *upd_flow = NULL;
@@ -8433,13 +8696,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
&job[_queue_attr[i]->size];
encap = (uint8_t *)
&mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
- items = (struct rte_flow_item *)
+ push = (uint8_t *)
&encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+ items = (struct rte_flow_item *)
+ &push[_queue_attr[i]->size * MLX5_PUSH_MAX_LEN];
upd_flow = (struct rte_flow_hw *)
&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
for (j = 0; j < _queue_attr[i]->size; j++) {
job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
+ job[j].push_data = &push[j * MLX5_PUSH_MAX_LEN];
job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
job[j].upd_flow = &upd_flow[j];
priv->hw_q[i].job[j] = &job[j];
--
2.39.2
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH 30/30] net/mlx5/hws: add stc reparse support for srv6 push pop
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (27 preceding siblings ...)
2023-10-29 16:32 ` [PATCH 29/30] net/mlx5: implement " Gregory Etelson
@ 2023-10-29 16:32 ` Gregory Etelson
2023-11-05 18:49 ` [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Thomas Monjalon
29 siblings, 0 replies; 33+ messages in thread
From: Gregory Etelson @ 2023-10-29 16:32 UTC (permalink / raw)
To: dev
Cc: getelson, mkashani, rasland, Rongwei Liu, Erez Shitrit, Ori Kam,
Matan Azrad, Viacheslav Ovsiienko, Suanming Mou
From: Rongwei Liu <rongweil@nvidia.com>
After pushing/popping SRv6 into/from IPv6 packets, the checksum
needs to stay correct.
In order to achieve this, each STE's reparse behavior must be
controllable (CX7 and above). Add two more flag enumeration definitions
to allow external control of the reparse property in the STC.
1. Push
   a. 1st STE, insert header action, reparse ignored (default is
      reparse always).
   b. 2nd STE, modify IPv6 protocol, reparse always as default.
   c. 3rd STE, modify header list, reparse always (default is
      reparse ignored).
2. Pop
   a. 1st STE, modify header list, reparse always (default is
      reparse ignored).
   b. 2nd STE, modify header list, reparse always (default is
      reparse ignored).
   c. 3rd STE, modify IPv6 protocol, reparse ignored (default is
      reparse always); remove header action, reparse always as default.
For CX6Lx and CX6Dx, the reparse behavior is still controlled by the RTC,
so only the pop action works correctly there.
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Reviewed-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_action.c | 115 +++++++++++++++++++--------
drivers/net/mlx5/hws/mlx5dr_action.h | 7 ++
2 files changed, 87 insertions(+), 35 deletions(-)
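
A conceptual mapping of the push steps above to the new reparse overrides;
the enum names are local stand-ins for readability, not the driver's
MLX5DR_ACTION_STC_REPARSE_* values:

/* Push-side mapping of STE index to reparse mode, per the description
 * above. The pop side follows the same scheme: ON for the two
 * modify-header-list STEs, OFF for the IPv6 protocol rewrite, and the
 * default for the remove-header action. */
enum example_reparse { EXAMPLE_DEFAULT, EXAMPLE_ON, EXAMPLE_OFF };

static enum example_reparse
example_push_ste_reparse(int ste)
{
	switch (ste) {
	case 0: return EXAMPLE_OFF;     /* insert header: skip reparse */
	case 1: return EXAMPLE_DEFAULT; /* modify ipv6.protocol: default (always) */
	case 2: return EXAMPLE_ON;      /* modify header list: force reparse */
	default: return EXAMPLE_DEFAULT;
	}
}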
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 281b09a582..daeabead2a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -640,6 +640,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_TNL_L3_TO_L2:
case MLX5DR_ACTION_TYP_MODIFY_HDR:
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
if (action->modify_header.require_reparse)
attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
@@ -678,9 +679,12 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
case MLX5DR_ACTION_TYP_INSERT_HEADER:
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
+ if (!action->reformat.require_reparse)
+ attr->reparse_mode = MLX5_IFC_STC_REPARSE_IGNORE;
+
attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
- attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
attr->insert_header.encap = action->reformat.encap;
attr->insert_header.insert_anchor = action->reformat.anchor;
attr->insert_header.arg_id = action->reformat.arg_obj->id;
@@ -1441,7 +1445,7 @@ static int
mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
uint8_t num_of_hdrs,
struct mlx5dr_action_reformat_header *hdrs,
- uint32_t log_bulk_sz)
+ uint32_t log_bulk_sz, uint32_t reparse)
{
struct mlx5dr_devx_obj *arg_obj;
size_t max_sz = 0;
@@ -1478,6 +1482,11 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
action[i].reformat.encap = 1;
}
+ if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
+ action[i].reformat.require_reparse = true;
+ else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
+ action[i].reformat.require_reparse = true;
+
ret = mlx5dr_action_create_stcs(&action[i], NULL);
if (ret) {
DR_LOG(ERR, "Failed to create stc for reformat");
@@ -1514,7 +1523,8 @@ mlx5dr_action_handle_l2_to_tunnel_l3(struct mlx5dr_action *action,
ret = mlx5dr_action_handle_insert_with_ptr(action,
num_of_hdrs,
hdrs,
- log_bulk_sz);
+ log_bulk_sz,
+ MLX5DR_ACTION_STC_REPARSE_DEFAULT);
if (ret)
goto put_shared_stc;
@@ -1657,7 +1667,8 @@ mlx5dr_action_create_reformat_hws(struct mlx5dr_action *action,
ret = mlx5dr_action_create_stcs(action, NULL);
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2:
- ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size);
+ ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs, hdrs, bulk_size,
+ MLX5DR_ACTION_STC_REPARSE_DEFAULT);
break;
case MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3:
ret = mlx5dr_action_handle_l2_to_tunnel_l3(action, num_of_hdrs, hdrs, bulk_size);
@@ -1765,7 +1776,8 @@ static int
mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
uint8_t num_of_patterns,
struct mlx5dr_action_mh_pattern *pattern,
- uint32_t log_bulk_size)
+ uint32_t log_bulk_size,
+ uint32_t reparse)
{
struct mlx5dr_devx_obj *pat_obj, *arg_obj = NULL;
struct mlx5dr_context *ctx = action->ctx;
@@ -1799,8 +1811,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
action[i].modify_header.num_of_patterns = num_of_patterns;
action[i].modify_header.max_num_of_actions = max_mh_actions;
action[i].modify_header.num_of_actions = num_actions;
- action[i].modify_header.require_reparse =
- mlx5dr_pat_require_reparse(pattern[i].data, num_actions);
+
+ if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
+ action[i].modify_header.require_reparse =
+ mlx5dr_pat_require_reparse(pattern[i].data, num_actions);
+ else if (reparse == MLX5DR_ACTION_STC_REPARSE_ON)
+ action[i].modify_header.require_reparse = true;
if (num_actions == 1) {
pat_obj = NULL;
@@ -1843,12 +1859,12 @@ mlx5dr_action_create_modify_header_hws(struct mlx5dr_action *action,
return rte_errno;
}
-struct mlx5dr_action *
-mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
- uint8_t num_of_patterns,
- struct mlx5dr_action_mh_pattern *patterns,
- uint32_t log_bulk_size,
- uint32_t flags)
+static struct mlx5dr_action *
+mlx5dr_action_create_modify_header_reparse(struct mlx5dr_context *ctx,
+ uint8_t num_of_patterns,
+ struct mlx5dr_action_mh_pattern *patterns,
+ uint32_t log_bulk_size,
+ uint32_t flags, uint32_t reparse)
{
struct mlx5dr_action *action;
int ret;
@@ -1896,7 +1912,8 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
ret = mlx5dr_action_create_modify_header_hws(action,
num_of_patterns,
patterns,
- log_bulk_size);
+ log_bulk_size,
+ reparse);
if (ret)
goto free_action;
@@ -1907,6 +1924,17 @@ mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
return NULL;
}
+struct mlx5dr_action *
+mlx5dr_action_create_modify_header(struct mlx5dr_context *ctx,
+ uint8_t num_of_patterns,
+ struct mlx5dr_action_mh_pattern *patterns,
+ uint32_t log_bulk_size,
+ uint32_t flags)
+{
+ return mlx5dr_action_create_modify_header_reparse(ctx, num_of_patterns, patterns,
+ log_bulk_size, flags,
+ MLX5DR_ACTION_STC_REPARSE_DEFAULT);
+}
static struct mlx5dr_devx_obj *
mlx5dr_action_dest_array_process_reformat(struct mlx5dr_context *ctx,
enum mlx5dr_action_type type,
@@ -2254,12 +2282,12 @@ mlx5dr_action_create_reformat_trailer(struct mlx5dr_context *ctx,
return action;
}
-struct mlx5dr_action *
-mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
- uint8_t num_of_hdrs,
- struct mlx5dr_action_insert_header *hdrs,
- uint32_t log_bulk_size,
- uint32_t flags)
+static struct mlx5dr_action *
+mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx,
+ uint8_t num_of_hdrs,
+ struct mlx5dr_action_insert_header *hdrs,
+ uint32_t log_bulk_size,
+ uint32_t flags, uint32_t reparse)
{
struct mlx5dr_action_reformat_header *reformat_hdrs;
struct mlx5dr_action *action;
@@ -2312,7 +2340,8 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
}
ret = mlx5dr_action_handle_insert_with_ptr(action, num_of_hdrs,
- reformat_hdrs, log_bulk_size);
+ reformat_hdrs, log_bulk_size,
+ reparse);
if (ret) {
DR_LOG(ERR, "Failed to create HWS reformat action");
goto free_reformat_hdrs;
@@ -2329,6 +2358,18 @@ mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
return NULL;
}
+struct mlx5dr_action *
+mlx5dr_action_create_insert_header(struct mlx5dr_context *ctx,
+ uint8_t num_of_hdrs,
+ struct mlx5dr_action_insert_header *hdrs,
+ uint32_t log_bulk_size,
+ uint32_t flags)
+{
+ return mlx5dr_action_create_insert_header_reparse(ctx, num_of_hdrs, hdrs,
+ log_bulk_size, flags,
+ MLX5DR_ACTION_STC_REPARSE_DEFAULT);
+}
+
struct mlx5dr_action *
mlx5dr_action_create_remove_header(struct mlx5dr_context *ctx,
struct mlx5dr_action_remove_header_attr *attr,
@@ -2422,8 +2463,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr1(struct mlx5dr_action *action)
pattern.data = cmd;
pattern.sz = sizeof(cmd);
- return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
- 0, action->flags);
+ return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+ action->flags,
+ MLX5DR_ACTION_STC_REPARSE_ON);
}
static void *
@@ -2469,8 +2511,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr2(struct mlx5dr_action *action)
pattern.data = cmd;
pattern.sz = sizeof(cmd);
- return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
- 0, action->flags);
+ return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+ action->flags,
+ MLX5DR_ACTION_STC_REPARSE_ON);
}
static void *
@@ -2496,8 +2539,9 @@ mlx5dr_action_create_pop_ipv6_route_ext_mhdr3(struct mlx5dr_action *action)
pattern.data = (__be64 *)cmd;
pattern.sz = sizeof(cmd);
- return mlx5dr_action_create_modify_header(action->ctx, 1, &pattern,
- 0, action->flags);
+ return mlx5dr_action_create_modify_header_reparse(action->ctx, 1, &pattern, 0,
+ action->flags,
+ MLX5DR_ACTION_STC_REPARSE_OFF);
}
static int
@@ -2644,8 +2688,9 @@ mlx5dr_action_create_push_ipv6_route_ext(struct mlx5dr_action *action,
insert_hdr.hdr.sz = hdr->sz;
insert_hdr.hdr.data = header;
action->ipv6_route_ext.action[0] =
- mlx5dr_action_create_insert_header(action->ctx, 1, &insert_hdr,
- bulk_size, action->flags);
+ mlx5dr_action_create_insert_header_reparse(action->ctx, 1, &insert_hdr,
+ bulk_size, action->flags,
+ MLX5DR_ACTION_STC_REPARSE_OFF);
action->ipv6_route_ext.action[1] =
mlx5dr_action_create_push_ipv6_route_ext_mhdr1(action);
action->ipv6_route_ext.action[2] =
@@ -2678,12 +2723,6 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
struct mlx5dr_action *action;
int ret;
- if (mlx5dr_context_cap_dynamic_reparse(ctx)) {
- DR_LOG(ERR, "IPv6 extension actions is not supported");
- rte_errno = ENOTSUP;
- return NULL;
- }
-
if (!mlx5dr_action_is_hws_flags(flags) ||
((flags & MLX5DR_ACTION_FLAG_SHARED) && log_bulk_size)) {
DR_LOG(ERR, "IPv6 extension flags don't fit HWS (flags: 0x%x)", flags);
@@ -2708,6 +2747,12 @@ mlx5dr_action_create_reformat_ipv6_ext(struct mlx5dr_context *ctx,
ret = mlx5dr_action_create_pop_ipv6_route_ext(action);
break;
case MLX5DR_ACTION_TYP_PUSH_IPV6_ROUTE_EXT:
+ if (!mlx5dr_context_cap_dynamic_reparse(ctx)) {
+ DR_LOG(ERR, "IPv6 routing extension push action is not supported");
+ rte_errno = ENOTSUP;
+ goto free_action;
+ }
+
ret = mlx5dr_action_create_push_ipv6_route_ext(action, hdr, log_bulk_size);
break;
default:
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index ce9091a336..ec6605bf7a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -65,6 +65,12 @@ enum mlx5dr_action_setter_flag {
ASF_HIT = 1 << 7,
};
+enum mlx5dr_action_stc_reparse {
+ MLX5DR_ACTION_STC_REPARSE_DEFAULT,
+ MLX5DR_ACTION_STC_REPARSE_ON,
+ MLX5DR_ACTION_STC_REPARSE_OFF,
+};
+
struct mlx5dr_action_default_stc {
struct mlx5dr_pool_chunk nop_ctr;
struct mlx5dr_pool_chunk nop_dw5;
@@ -146,6 +152,7 @@ struct mlx5dr_action {
uint8_t anchor;
uint8_t offset;
bool encap;
+ uint8_t require_reparse;
} reformat;
struct {
struct mlx5dr_action
--
2.39.2
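The hunks above all follow one pattern: each exported creator keeps its public signature and becomes a thin wrapper around a file-local *_reparse() variant that takes the new MLX5DR_ACTION_STC_REPARSE_* mode, so only code inside mlx5dr_action.c (here, the srv6 push/pop helpers) can force re-parse on or off, while external callers keep the default behavior. Below is a minimal, self-contained model of that wrapper split; the names (struct act, act_create*, REPARSE_*) are hypothetical stand-ins for illustration, not driver API.

#include <stdio.h>
#include <stdlib.h>

enum reparse_mode {
	REPARSE_DEFAULT,
	REPARSE_ON,
	REPARSE_OFF,
};

struct act {
	enum reparse_mode reparse;
};

/* File-local variant: internal callers pin the re-parse mode explicitly. */
static struct act *
act_create_reparse(enum reparse_mode reparse)
{
	struct act *a = malloc(sizeof(*a));

	if (a != NULL)
		a->reparse = reparse;
	return a;
}

/* Exported creator: unchanged signature, always forwards the default mode. */
struct act *
act_create(void)
{
	return act_create_reparse(REPARSE_DEFAULT);
}

int
main(void)
{
	struct act *pub = act_create();                    /* external user */
	struct act *push = act_create_reparse(REPARSE_ON); /* internal path */

	if (pub == NULL || push == NULL)
		return 1;
	printf("pub=%d push=%d\n", pub->reparse, push->reparse);
	free(pub);
	free(push);
	return 0;
}

Keeping the _reparse() variants static preserves the exported mlx5dr API while still letting the IPv6 routing extension code pin the mode per stage, as the hunks show: REPARSE_ON for the first two pop modify-header steps, REPARSE_OFF for the last one and for the push insert-header step.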
* Re: [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
` (28 preceding siblings ...)
2023-10-29 16:32 ` [PATCH 30/30] net/mlx5/hws: add stc reparse support for srv6 push pop Gregory Etelson
@ 2023-11-05 18:49 ` Thomas Monjalon
2023-11-06 7:32 ` Etelson, Gregory
29 siblings, 1 reply; 33+ messages in thread
From: Thomas Monjalon @ 2023-11-05 18:49 UTC (permalink / raw)
To: getelson, rasland
Cc: dev, mkashani, Ori Kam, Matan Azrad, Viacheslav Ovsiienko,
Suanming Mou, Gregory Etelson, asafp
The description of this patch does not match the change.
Also, the change is going backward, using deprecated fields.
It does not make sense, so I'll skip it.
29/10/2023 17:31, Gregory Etelson:
> New mlx5dr_context member replaces mlx5dr_cmd_query_caps.
> Capabilities structure is a member of mlx5dr_context.
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
> drivers/net/mlx5/hws/mlx5dr_definer.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
> index 95b5d4b70e..75ba46b966 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_definer.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
> @@ -1092,7 +1092,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
> return rte_errno;
> }
>
> - if (m->hdr.teid) {
> + if (m->teid) {
> if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_TEID_ENABLED)) {
> rte_errno = ENOTSUP;
> return rte_errno;
> @@ -1118,7 +1118,7 @@ mlx5dr_definer_conv_item_gtp(struct mlx5dr_definer_conv_data *cd,
> }
>
>
> - if (m->hdr.msg_type) {
> + if (m->msg_type) {
> if (!(caps->flex_protocols & MLX5_HCA_FLEX_GTPU_DW_0_ENABLED)) {
> rte_errno = ENOTSUP;
> return rte_errno;
>
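For context on the "deprecated fields" remark: struct rte_flow_item_gtp keeps the flat teid/msg_type members only as legacy aliases of the embedded GTP header, and the quoted hunks move back to them from the hdr view. A hedged illustration of the non-deprecated form follows (a mask sketch only, not part of any patch in this series):

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Illustrative GTP item mask written against the hdr view rather than the
 * deprecated flat fields the quoted hunks revert to; it would be passed as
 * the .mask of an RTE_FLOW_ITEM_TYPE_GTP pattern item. */
static const struct rte_flow_item_gtp gtp_mask = {
	.hdr = {
		.msg_type = 0xff,
		.teid = RTE_BE32(0xffffffff),
	},
};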
* Re: [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers
2023-10-29 16:31 ` [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers Gregory Etelson
@ 2023-11-05 20:27 ` Thomas Monjalon
0 siblings, 0 replies; 33+ messages in thread
From: Thomas Monjalon @ 2023-11-05 20:27 UTC (permalink / raw)
To: rasland, Gregory Etelson
Cc: dev, getelson, mkashani, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou, asafp
29/10/2023 17:31, Gregory Etelson:
> 1. Add rte_eth_dev parameter to the `flow_hw_get_reg_id()`
>
> 2. Add mlx5_flow_hw_get_reg_id()
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Ori Kam <orika@nvidia.com>
> ---
This,
> -static void
> +void
> flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
> {
and this,
> +void
> +flow_rxq_mark_flag_set(struct rte_eth_dev *dev);
are completely unrelated changes,
and probably unneeded.
* Re: [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data
2023-11-05 18:49 ` [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Thomas Monjalon
@ 2023-11-06 7:32 ` Etelson, Gregory
0 siblings, 0 replies; 33+ messages in thread
From: Etelson, Gregory @ 2023-11-06 7:32 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Gregory Etelson, rasland, dev, mkashani, Ori Kam, Matan Azrad,
Viacheslav Ovsiienko, Suanming Mou, asafp
On Sun, 5 Nov 2023, Thomas Monjalon wrote:
> The description of this patch does not match the change.
> Also, the change is going backward, using deprecated fields.
> It does not make sense, so I'll skip it.
>
>
Hello Thomas,
Patches in that series were superseded.
Regards,
Gregory
Thread overview: 33+ messages
2023-10-29 16:31 [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Gregory Etelson
2023-10-29 16:31 ` [PATCH 02/30] net/mlx5: add flow_hw_get_reg_id_from_ctx() Gregory Etelson
2023-10-29 16:31 ` [PATCH 03/30] net/mlx5/hws: Definer, use flow_hw_get_reg_id_from_ctx function call Gregory Etelson
2023-10-29 16:31 ` [PATCH 04/30] net/mlx5: add rte_device parameter to locate HWS registers Gregory Etelson
2023-11-05 20:27 ` Thomas Monjalon
2023-10-29 16:31 ` [PATCH 05/30] net/mlx5: separate port REG_C registers usage Gregory Etelson
2023-10-29 16:31 ` [PATCH 06/30] net/mlx5: merge REG_C aliases Gregory Etelson
2023-10-29 16:31 ` [PATCH 07/30] net/mlx5: initialize HWS flow tags registers in shared dev context Gregory Etelson
2023-10-29 16:31 ` [PATCH 08/30] net/mlx5/hws: adding method to query rule hash Gregory Etelson
2023-10-29 16:31 ` [PATCH 09/30] net/mlx5: add support for calc hash Gregory Etelson
2023-10-29 16:31 ` [PATCH 10/30] net/mlx5: fix insert by index Gregory Etelson
2023-10-29 16:31 ` [PATCH 11/30] net/mlx5: fix query for NIC flow cap Gregory Etelson
2023-10-29 16:31 ` [PATCH 12/30] net/mlx5: add support for more registers Gregory Etelson
2023-10-29 16:31 ` [PATCH 13/30] net/mlx5: add validation support for tags Gregory Etelson
2023-10-29 16:31 ` [PATCH 14/30] net/mlx5: reuse reformat and modify header actions in a table Gregory Etelson
2023-10-29 16:31 ` [PATCH 15/30] net/mlx5/hws: check the rule status on rule update Gregory Etelson
2023-10-29 16:31 ` [PATCH 16/30] net/mlx5/hws: support IPsec encryption/decryption action Gregory Etelson
2023-10-29 16:31 ` [PATCH 17/30] net/mlx5/hws: support ASO IPsec action Gregory Etelson
2023-10-29 16:31 ` [PATCH 18/30] net/mlx5/hws: support reformat trailer action Gregory Etelson
2023-10-29 16:31 ` [PATCH 19/30] net/mlx5/hws: support ASO first hit action Gregory Etelson
2023-10-29 16:31 ` [PATCH 20/30] net/mlx5/hws: support insert header action Gregory Etelson
2023-10-29 16:31 ` [PATCH 21/30] net/mlx5/hws: support remove " Gregory Etelson
2023-10-29 16:31 ` [PATCH 22/30] net/mlx5/hws: allow jump to TIR over FDB Gregory Etelson
2023-10-29 16:31 ` [PATCH 23/30] net/mlx5/hws: support dynamic re-parse Gregory Etelson
2023-10-29 16:31 ` [PATCH 24/30] net/mlx5/hws: dynamic re-parse for modify header Gregory Etelson
2023-10-29 16:31 ` [PATCH 25/30] net/mlx5: sample the srv6 last segment Gregory Etelson
2023-10-29 16:31 ` [PATCH 26/30] net/mlx5/hws: fix potential wrong errno value Gregory Etelson
2023-10-29 16:31 ` [PATCH 27/30] net/mlx5/hws: add IPv6 routing extension push remove actions Gregory Etelson
2023-10-29 16:32 ` [PATCH 28/30] net/mlx5/hws: add setter for IPv6 routing push remove Gregory Etelson
2023-10-29 16:32 ` [PATCH 29/30] net/mlx5: implement " Gregory Etelson
2023-10-29 16:32 ` [PATCH 30/30] net/mlx5/hws: add stc reparse support for srv6 push pop Gregory Etelson
2023-11-05 18:49 ` [PATCH 01/30] net/mlx5/hws: Definer, add mlx5dr context to definer_conv_data Thomas Monjalon
2023-11-06 7:32 ` Etelson, Gregory