DPDK patches and discussions
* [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index
@ 2024-02-13  9:50 Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 2/9] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
                   ` (7 more replies)
  0 siblings, 8 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Alex Vesker
  Cc: dev

The location of indexed rules is determined by the index, not the item
hash. A matcher test is added to prevent access to non-existent items.
This avoids unnecessary processing and potential segmentation faults.
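
For illustration (not part of the patch; matcher creation with
insert-by-index attributes is elided and 'my_cookie' is a placeholder):
for such matchers the rule slot is chosen by the caller, so the
item-based skip logic is bypassed:

  /* The rule lands at rule_idx, not at a slot derived from item hash */
  struct mlx5dr_rule_attr r_attr = {
          .queue_id  = 0,
          .user_data = my_cookie,
          .rule_idx  = 5,
  };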

Fixes: 405242c ("net/mlx5/hws: add rule object")
Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index fa19303b91..e39137a6ee 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -23,6 +23,9 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
 	*skip_rx = false;
 	*skip_tx = false;
 
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		return;
+
 	if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) {
 		v = items[mt->vport_item_id].spec;
 		vport = flow_hw_conv_port_id(v->port_id);
-- 
2.39.3


* [PATCH 2/9] net/mlx5/hws: add check for not supported fields in VXLAN
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 3/9] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Alex Vesker
  Cc: dev

Don't allow the user to mask over the rsvd0 / rsvd1 fields, which are not
supported.
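
For illustration only (not part of the patch): a template mask that
touches one of the reserved fields is now rejected up front:

  /* Matching the VNI is fine; masking rsvd1 now fails with ENOTSUP */
  struct rte_flow_item_vxlan vxlan_mask = {
          .vni   = { 0xff, 0xff, 0xff },
          .rsvd1 = 0xff,
  };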

Fixes: dbff89ef806f ("net/mlx5/hws: fix tunnel protocol checks")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 79d98bbf78..8b8757ecac 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1414,6 +1414,13 @@ mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd,
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
 
+	if (m && (m->rsvd0[0] != 0 || m->rsvd0[1] != 0 || m->rsvd0[2] != 0 ||
+	    m->rsvd1 != 0)) {
+		DR_LOG(ERR, "Reserved fields are not supported");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
 	if (inner) {
 		DR_LOG(ERR, "Inner VXLAN item not supported");
 		rte_errno = ENOTSUP;
-- 
2.39.3


* [PATCH 3/9] net/mlx5/hws: add support for resizable matchers
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 2/9] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 4/9] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

Add support for matcher resize with the following new API calls:
 - mlx5dr_matcher_resize_set_target
 - mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the src
matcher to the dst matcher. Both matchers must have the same
characteristics (e.g. same mt, same at). It is the user's responsibility
to make sure that the dst matcher has enough space for the moved rules.
After this call, the user can move rules from the src matcher into the dst
matcher and is no longer allowed to insert rules into the src matcher.

The second function moves a rule from the matcher that is being resized to
the bigger matcher. Moving a single rule consists of creating a new rule
in the destination matcher and deleting the rule from the source matcher.
This operation generates a single completion.
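
A minimal usage sketch (illustration, not part of the patch; 'small',
'big', 'rule' and 'attr' are placeholders, both matchers are assumed to be
created with attr.resizable set, and completion polling is elided):

  /* Link the matchers; new rules may now only be inserted into 'big' */
  if (mlx5dr_matcher_resize_set_target(small, big))
          return -rte_errno;

  /* Move a rule; each move generates exactly one completion */
  struct mlx5dr_rule_attr attr = { .queue_id = 0, .user_data = rule };

  if (mlx5dr_matcher_resize_rule_move(small, rule, &attr))
          return -rte_errno;

  /* Once all move completions are polled, 'small' holds no rules and
   * can be destroyed with mlx5dr_matcher_destroy().
   */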

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         |  39 ++++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 ++
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 290 ++++++++++++++++++++++----
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  30 ++-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 ++++
 8 files changed, 573 insertions(+), 41 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d88f73ab57..9d8f8e13dc 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr {
 	/* Define the insertion and distribution modes for this matcher */
 	enum mlx5dr_matcher_insert_mode insert_mode;
 	enum mlx5dr_matcher_distribute_mode distribute_mode;
+	/* Define whether the created matcher supports resizing into a bigger matcher */
+	bool resizable;
 	union {
 		struct {
 			uint8_t sz_row_log;
@@ -419,6 +421,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher);
 int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_action_template *at);
 
+/* Link two matchers and enable moving rules from src matcher to dst matcher.
+ * Both matchers must be in the same table type, must be created with 'resizable'
+ * property, and should have the same characteristics (e.g. same mt, same at).
+ *
+ * It is the user's responsibility to make sure that the dst matcher
+ * was allocated with the appropriate size.
+ *
+ * Once the function is completed, the user is:
+ *  - allowed to move rules from src into dst matcher
+ *  - no longer allowed to insert rules to the src matcher
+ *
+ * The user is always allowed to insert rules to the dst matcher and
+ * to delete rules from any matcher.
+ *
+ * @param[in] src_matcher
+ *	source matcher for moving rules from
+ * @param[in] dst_matcher
+ *	destination matcher for moving rules to
+ * @return zero on successful move, non zero otherwise.
+ */
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher);
+
+/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher
+ *
+ * @param[in] src_matcher
+ *	matcher that the rule belongs to
+ * @param[in] rule
+ *	the rule to move
+ * @param[in] attr
+ *	rule attributes
+ * @return zero on success, non zero otherwise.
+ */
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr);
+
 /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation.
  *
  * @return size in bytes of rule handle struct.
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 8b8757ecac..e564062313 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -3296,9 +3296,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer)
 	return definer->obj->id;
 }
 
-static int
-mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
-		       struct mlx5dr_definer *definer_b)
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b)
 {
 	int i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index ced9d9da13..71cc0e94de 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -733,4 +733,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache);
 
 void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache);
 
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b);
+
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 4ea161eae6..0d5c462734 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	return 0;
 }
 
+static int
+mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	resize_data = simple_calloc(1, sizeof(*resize_data));
+	if (!resize_data) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	resize_data->stc = src_matcher->action_ste.stc;
+	resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0;
+	resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1;
+	resize_data->action_ste_pool = src_matcher->action_ste.max_stes ?
+				       src_matcher->action_ste.pool :
+				       NULL;
+
+	/* Place the new resized matcher on the dst matcher's list */
+	LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+			 resize_data, next);
+
+	/* Move all the previous resized matchers to the dst matcher's list */
+	while (!LIST_EMPTY(&src_matcher->resize_data)) {
+		resize_data = LIST_FIRST(&src_matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+		LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+				 resize_data, next);
+	}
+
+	return 0;
+}
+
+static void
+mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	if (!mlx5dr_matcher_is_resizable(matcher) ||
+	    !matcher->action_ste.max_stes)
+		return;
+
+	while (!LIST_EMPTY(&matcher->resize_data)) {
+		resize_data = LIST_FIRST(&matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+
+		mlx5dr_action_free_single_stc(matcher->tbl->ctx,
+					      matcher->tbl->type,
+					      &resize_data->stc);
+
+		if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB)
+			mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1);
+		mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0);
+		if (resize_data->action_ste_pool)
+			mlx5dr_pool_destroy(resize_data->action_ste_pool);
+		simple_free(resize_data);
+	}
+}
+
 static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
 {
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
@@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher)
 {
 	struct mlx5dr_table *tbl = matcher->tbl;
 
-	if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION)
+	if (!matcher->action_ste.max_stes ||
+	    matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION ||
+	    mlx5dr_matcher_is_in_resize(matcher))
 		return;
 
 	mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc);
@@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 			DR_LOG(ERR, "Root matcher does not support at attaching");
 			goto not_supported;
 		}
+		if (attr->resizable) {
+			DR_LOG(ERR, "Root matcher does not support resizing");
+			goto not_supported;
+		}
 		return 0;
 	}
 
@@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 	    attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH)
 		attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log);
 
+	matcher->flags |= attr->resizable ? MLX5DR_MATCHER_FLAGS_RESIZABLE : 0;
+
 	return mlx5dr_matcher_check_attr_sz(caps, attr);
 
 not_supported:
@@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher)
 
 static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher)
 {
+	mlx5dr_matcher_resize_uninit(matcher);
 	mlx5dr_matcher_disconnect(matcher);
 	mlx5dr_matcher_create_uninit_shared(matcher);
 	mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH);
@@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt)
 	simple_free(mt);
 	return 0;
 }
+
+static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher,
+					  struct mlx5dr_matcher *dst_matcher)
+{
+	int i;
+
+	if (mlx5dr_table_is_root(src_matcher->tbl) ||
+	    mlx5dr_table_is_root(dst_matcher->tbl)) {
+		DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported");
+		goto out_einval;
+	}
+
+	if (src_matcher->tbl->type != dst_matcher->tbl->type) {
+		DR_LOG(ERR, "Table type mismatch for src/dst matchers");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_req_fw_wqe(src_matcher) ||
+	    mlx5dr_matcher_req_fw_wqe(dst_matcher)) {
+		DR_LOG(ERR, "Matchers require FW WQE - resize unsupported");
+		goto out_einval;
+	}
+
+	if (!mlx5dr_matcher_is_resizable(src_matcher) ||
+	    !mlx5dr_matcher_is_resizable(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is not resizable");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_insert_by_idx(src_matcher) !=
+	    mlx5dr_matcher_is_insert_by_idx(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matchers insert mode mismatch");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_in_resize(src_matcher) ||
+	    mlx5dr_matcher_is_in_resize(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is already in resize");
+		goto out_einval;
+	}
+
+	/* Compare match templates - make sure the definers are equivalent */
+	if (src_matcher->num_of_mt != dst_matcher->num_of_mt) {
+		DR_LOG(ERR, "Src/dst matcher match templates mismatch");
+		goto out_einval;
+	}
+
+	if (src_matcher->action_ste.max_stes > dst_matcher->action_ste.max_stes) {
+		DR_LOG(ERR, "Src/dst matcher max STEs mismatch");
+		goto out_einval;
+	}
+
+	for (i = 0; i < src_matcher->num_of_mt; i++) {
+		if (mlx5dr_definer_compare(src_matcher->mt[i].definer,
+					   dst_matcher->mt[i].definer)) {
+			DR_LOG(ERR, "Src/dst matcher definers mismatch");
+			goto out_einval;
+		}
+	}
+
+	return 0;
+
+out_einval:
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher)
+{
+	int ret = 0;
+
+	pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock);
+
+	if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) {
+		ret = -rte_errno;
+		goto out;
+	}
+
+	src_matcher->resize_dst = dst_matcher;
+
+	if (mlx5dr_matcher_resize_init(src_matcher)) {
+		src_matcher->resize_dst = NULL;
+		ret = -rte_errno;
+	}
+
+out:
+	pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock);
+	return ret;
+}
+
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) {
+		DR_LOG(ERR, "Matcher is not resizable or not in resize");
+		goto out_einval;
+	}
+
+	if (unlikely(src_matcher != rule->matcher)) {
+		DR_LOG(ERR, "Rule doesn't belong to src matcher");
+		goto out_einval;
+	}
+
+	return mlx5dr_rule_move_hws_add(rule, attr);
+
+out_einval:
+	rte_errno = EINVAL;
+	return -rte_errno;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
index 363a61fd41..0f2bf96e8b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.h
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags {
 	MLX5DR_MATCHER_FLAGS_RANGE_DEFINER	= 1 << 0,
 	MLX5DR_MATCHER_FLAGS_HASH_DEFINER	= 1 << 1,
 	MLX5DR_MATCHER_FLAGS_COLLISION		= 1 << 2,
+	MLX5DR_MATCHER_FLAGS_RESIZABLE		= 1 << 3,
 };
 
 struct mlx5dr_match_template {
@@ -59,6 +60,14 @@ struct mlx5dr_matcher_action_ste {
 	uint8_t max_stes;
 };
 
+struct mlx5dr_matcher_resize_data {
+	struct mlx5dr_pool_chunk stc;
+	struct mlx5dr_devx_obj *action_ste_rtc_0;
+	struct mlx5dr_devx_obj *action_ste_rtc_1;
+	struct mlx5dr_pool *action_ste_pool;
+	LIST_ENTRY(mlx5dr_matcher_resize_data) next;
+};
+
 struct mlx5dr_matcher {
 	struct mlx5dr_table *tbl;
 	struct mlx5dr_matcher_attr attr;
@@ -71,10 +80,12 @@ struct mlx5dr_matcher {
 	uint8_t flags;
 	struct mlx5dr_devx_obj *end_ft;
 	struct mlx5dr_matcher *col_matcher;
+	struct mlx5dr_matcher *resize_dst;
 	struct mlx5dr_matcher_match_ste match_ste;
 	struct mlx5dr_matcher_action_ste action_ste;
 	struct mlx5dr_definer *hash_definer;
 	LIST_ENTRY(mlx5dr_matcher) next;
+	LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data;
 };
 
 static inline bool
@@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt)
 	return (!!mt->range_definer);
 }
 
+static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher)
+{
+	return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE);
+}
+
+static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher)
+{
+	return !!matcher->resize_dst;
+}
+
 static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher)
 {
 	/* Currently HWS doesn't support hash different from match or range */
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index e39137a6ee..6bf087e187 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -114,6 +114,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 	}
 }
 
+static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule,
+				     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst;
+
+	if (rule->resize_info->rtc_0) {
+		ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id;
+		ste_attr->retry_rtc_0 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_0->id : 0;
+	}
+	if (rule->resize_info->rtc_1) {
+		ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id;
+		ste_attr->retry_rtc_1 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_1->id : 0;
+	}
+}
+
 static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 				 struct mlx5dr_rule *rule,
 				 bool err,
@@ -134,6 +151,34 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info));
+	if (unlikely(!rule->resize_info)) {
+		assert(rule->resize_info);
+		rte_errno = ENOMEM;
+	}
+
+	memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl,
+	       sizeof(rule->resize_info->ctrl_seg));
+	memcpy(rule->resize_info->data_seg, ste_attr->wqe_data,
+	       sizeof(rule->resize_info->data_seg));
+
+	rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ?
+					     rule->matcher->action_ste.pool :
+					     NULL;
+}
+
+static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule)
+{
+	if (rule->resize_info) {
+		simple_free(rule->resize_info);
+		rule->resize_info = NULL;
+	}
+}
+
 static void
 mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 			     struct mlx5dr_send_ste_attr *ste_attr)
@@ -161,17 +206,29 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 		return;
 	}
 
-	if (is_jumbo)
-		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
-	else
-		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+	if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
+		if (is_jumbo)
+			memcpy(&rule->tag.jumbo, ste_attr->wqe_data->action, MLX5DR_JUMBO_TAG_SZ);
+		else
+			memcpy(&rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+		return;
+	}
+
+	mlx5dr_rule_save_resize_info(rule, ste_attr);
 }
 
 static void
 mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
 {
-	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		simple_free(rule->tag_ptr);
+		return;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_clear_resize_info(rule);
+		return;
+	}
 }
 
 static void
@@ -188,8 +245,11 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
 			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
 			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
 		}
-	} else {
+	} else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
 		ste_attr->wqe_tag = &rule->tag;
+	} else {
+		ste_attr->wqe_tag = (struct mlx5dr_rule_match_tag *)
+			&rule->resize_info->data_seg[MLX5DR_STE_CTRL_SZ];
 	}
 }
 
@@ -220,6 +280,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_pool *pool;
 
 	if (rule->action_ste_idx > -1 &&
 	    !matcher->attr.optimize_using_rule_idx &&
@@ -229,7 +290,11 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 		/* This release is safe only when the rule match part was deleted */
 		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
 		ste.offset = rule->action_ste_idx;
-		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+
+		/* Free the original action pool if rule was resized */
+		pool = mlx5dr_matcher_is_resizable(matcher) ? rule->resize_info->action_ste_pool :
+							      matcher->action_ste.pool;
+		mlx5dr_pool_chunk_free(pool, &ste);
 	}
 }
 
@@ -266,6 +331,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr)
+{
+	/* Save the old RTC IDs to be later used in match STE delete */
+	rule->resize_info->rtc_0 = rule->rtc_0;
+	rule->resize_info->rtc_1 = rule->rtc_1;
+	rule->resize_info->rule_idx = attr->rule_idx;
+
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING;
+}
+
 static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 					 struct mlx5dr_rule_attr *attr,
 					 uint8_t mt_idx,
@@ -346,7 +428,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 	/* Send WQEs to FW */
 	mlx5dr_send_stes_fw(queue, &ste_attr);
 
-	/* Backup TAG on the rule for deletion */
+	/* Backup TAG on the rule for deletion, and save ctrl/data
+	 * segments to be used when resizing the matcher.
+	 */
 	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
@@ -469,7 +553,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		mlx5dr_send_ste(queue, &ste_attr);
 	}
 
-	/* Backup TAG on the rule for deletion, only after insertion */
+	/* Backup TAG on the rule for deletion and resize info for
+	 * moving rules to a new matcher, only after insertion.
+	 */
 	if (!is_update)
 		mlx5dr_rule_save_delete_info(rule, &ste_attr);
 
@@ -496,7 +582,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
-	/* Clear complex tag */
+	/* Clear complex tag or info that was saved for matcher resizing */
 	mlx5dr_rule_clear_delete_info(rule);
 
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
@@ -571,12 +657,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 
 	mlx5dr_rule_load_delete_info(rule, &ste_attr);
 
-	if (unlikely(fw_wqe)) {
+	if (unlikely(fw_wqe))
 		mlx5dr_send_stes_fw(queue, &ste_attr);
-		mlx5dr_rule_clear_delete_info(rule);
-	} else {
+	else
 		mlx5dr_send_ste(queue, &ste_attr);
-	}
+
+	mlx5dr_rule_clear_delete_info(rule);
 
 	return 0;
 }
@@ -664,9 +750,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
-static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+
 	if (unlikely(!attr->user_data)) {
 		rte_errno = EINVAL;
 		return rte_errno;
@@ -681,6 +769,153 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck_move(struct mlx5dr_rule *rule,
+					     struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) {
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) {
+		/* Matcher in resize - new rules are not allowed */
+		rte_errno = EAGAIN;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+
+	if (unlikely((mlx5dr_table_is_root(matcher->tbl) ||
+		     mlx5dr_matcher_req_fw_wqe(matcher)))) {
+		DR_LOG(ERR, "Rule update is not supported on current matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(!matcher->attr.optimize_using_rule_idx &&
+		     !mlx5dr_matcher_is_insert_by_idx(matcher))) {
+		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		DR_LOG(ERR, "Rule update is not supported on resizable matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) {
+		DR_LOG(ERR, "Current rule status does not allow update");
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck_create(rule, attr);
+}
+
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue_ptr,
+				void *user_data)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0};
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_engine *queue = queue_ptr;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+
+	/* Send dependent WQEs */
+	mlx5dr_send_all_dep_wqe(queue);
+
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING;
+
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.send_attr.user_data = user_data;
+	ste_attr.rtc_0 = rule->resize_info->rtc_0;
+	ste_attr.rtc_1 = rule->resize_info->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1;
+	ste_attr.wqe_ctrl = &empty_wqe_ctrl;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		ste_attr.direct_index = rule->resize_info->rule_idx;
+
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	if (unlikely(mlx5dr_rule_enqueue_precheck_move(rule, attr)))
+		return -rte_errno;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_move_init(rule, attr);
+
+	mlx5dr_rule_move_get_rtc(rule, &ste_attr);
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg;
+	ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg;
+	ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+				attr->rule_idx : 0;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -689,13 +924,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       struct mlx5dr_rule_attr *attr,
 		       struct mlx5dr_rule *rule_handle)
 {
-	struct mlx5dr_context *ctx;
 	int ret;
 
 	rule_handle->matcher = matcher;
-	ctx = matcher->tbl->ctx;
 
-	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr)))
 		return -rte_errno;
 
 	assert(matcher->num_of_mt >= mt_idx);
@@ -723,7 +956,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 {
 	int ret;
 
-	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
 		return -rte_errno;
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
@@ -739,24 +972,9 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 			      struct mlx5dr_rule_action rule_actions[],
 			      struct mlx5dr_rule_attr *attr)
 {
-	struct mlx5dr_matcher *matcher = rule_handle->matcher;
 	int ret;
 
-	if (unlikely(mlx5dr_table_is_root(matcher->tbl) ||
-	    unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))) {
-		DR_LOG(ERR, "Rule update not supported on current matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (!matcher->attr.optimize_using_rule_idx &&
-	    !mlx5dr_matcher_is_insert_by_idx(matcher)) {
-		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
 	ret = mlx5dr_rule_create_hws(rule_handle,
@@ -780,7 +998,7 @@ int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher,
 			       enum mlx5dr_rule_hash_calc_mode mode,
 			       uint32_t *ret_hash)
 {
-	uint8_t tag[MLX5DR_STE_SZ] = {0};
+	uint8_t tag[MLX5DR_WQE_SZ_GTA_DATA] = {0};
 	struct mlx5dr_match_template *mt;
 
 	if (!matcher || !matcher->mt) {
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f7d97eead5..07adf9c5ad 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,7 +10,6 @@ enum {
 	MLX5DR_ACTIONS_SZ = 12,
 	MLX5DR_MATCH_TAG_SZ = 32,
 	MLX5DR_JUMBO_TAG_SZ = 44,
-	MLX5DR_STE_SZ = 64,
 };
 
 enum mlx5dr_rule_status {
@@ -23,6 +22,12 @@ enum mlx5dr_rule_status {
 	MLX5DR_RULE_STATUS_FAILED,
 };
 
+enum mlx5dr_rule_move_state {
+	MLX5DR_RULE_RESIZE_STATE_IDLE,
+	MLX5DR_RULE_RESIZE_STATE_WRITING,
+	MLX5DR_RULE_RESIZE_STATE_DELETING,
+};
+
 struct mlx5dr_rule_match_tag {
 	union {
 		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
@@ -33,6 +38,16 @@ struct mlx5dr_rule_match_tag {
 	};
 };
 
+struct mlx5dr_rule_resize_info {
+	uint8_t state;
+	uint32_t rtc_0;
+	uint32_t rtc_1;
+	uint32_t rule_idx;
+	struct mlx5dr_pool *action_ste_pool;
+	uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */
+	uint8_t data_seg[MLX5DR_WQE_SZ_GTA_DATA]; /* Data segment of STE: 64 bytes */
+};
+
 struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
@@ -40,6 +55,7 @@ struct mlx5dr_rule {
 		/* Pointer to tag to store more than one tag */
 		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
+		struct mlx5dr_rule_resize_info *resize_info;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
 	uint32_t rtc_1; /* The RTC into which the STE was inserted */
@@ -50,4 +66,16 @@ struct mlx5dr_rule {
 
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
 
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue, void *user_data);
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr);
+
+static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule)
+{
+	return rule->resize_info &&
+	       rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE;
+}
+
 #endif /* MLX5DR_RULE_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..64138279a1 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue)
 	mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+			/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+			/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 
 	/* Update rule status for the last completion */
 	if (!priv->rule->pending_wqes) {
+		if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+			mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+			return;
+		}
+
 		if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 			/* Rule completely failed and doesn't require cleanup */
 			if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
-- 
2.39.3


* [PATCH 4/9] net/mlx5/hws: reordering the STE fields to improve hash
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 2/9] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 3/9] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 5/9] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

Inserting two rules with the same hash result into the same matcher causes
collisions, which can degrade PPS. Changing the order of some fields in
the STE changes the hash result, so each ordering yields a different hash
distribution over the inputs. By using precomputed optimal DW locations,
we can change the STE order for a limited set of the most common values,
reducing the number of hash collisions and improving latency.
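
A worked example of the table below (illustration only): for a matcher
created with rule.num_log == 3, row 3 of optimal_dist_dw is
{1, 0, 1, 0, 1, 0}, where the columns stand for {DW5, DW4, DW3, DW2, DW1,
DW0}. DW5, DW3 and DW1 thus give a complete hash distribution for that
size, so the optimizer swaps the IPv4 source selector (if present) into
DW5 and the IPv4 destination selector into DW3, leaving the remaining
selectors untouched.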

Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 64 +++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index e564062313..eb788a772a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -100,6 +100,33 @@
 	__mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \
 	__mlx5_mask(typ, fld))
 
+#define MAX_ROW_LOG 31
+
+enum header_layout {
+	MLX5DR_HL_IPV4_SRC = 64,
+	MLX5DR_HL_IPV4_DST = 65,
+	MAX_HL_PRIO,
+};
+
+/* Each row (i) indicates a different matcher size, and each column (j)
+ * represents {DW5, DW4, DW3, DW2, DW1, DW0}.
+ * For values 0,..,2^i, and j (DW) 0,..,5: optimal_dist_dw[i][j] is 1 if the
+ * number of different hash results on these values equals 2^i, meaning this
+ * DW hash distribution is complete.
+ */
+int optimal_dist_dw[MAX_ROW_LOG][DW_SELECTORS_MATCH] = {
+	{1, 1, 1, 1, 1, 1}, {0, 1, 1, 0, 1, 0}, {0, 1, 1, 0, 1, 0},
+	{1, 0, 1, 0, 1, 0}, {0, 0, 0, 1, 1, 0}, {0, 1, 1, 0, 1, 0},
+	{0, 0, 0, 0, 1, 0}, {0, 1, 1, 0, 1, 0}, {0, 0, 0, 0, 0, 0},
+	{1, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 1, 0, 1, 0, 0},
+	{1, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 1}, {1, 1, 1, 0, 0, 0},
+	{1, 1, 1, 0, 1, 0}, {0, 0, 1, 1, 0, 0}, {0, 1, 1, 0, 0, 1},
+	{0, 0, 1, 0, 0, 1}, {0, 0, 1, 0, 0, 0}, {1, 0, 1, 1, 0, 0},
+	{1, 0, 1, 0, 0, 1}, {0, 0, 1, 1, 0, 1}, {1, 1, 1, 0, 0, 0},
+	{0, 1, 0, 1, 0, 1}, {0, 0, 0, 0, 0, 1}, {0, 0, 0, 1, 1, 1},
+	{0, 0, 1, 0, 0, 1}, {1, 1, 0, 1, 1, 0}, {0, 0, 0, 0, 1, 0},
+	{0, 0, 0, 1, 1, 0}};
+
 struct mlx5dr_definer_sel_ctrl {
 	uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */
 	uint8_t allowed_lim_dw;  /* Limited DW selectors cover offset < 64 */
@@ -3185,6 +3212,37 @@ mlx5dr_definer_find_best_range_fit(struct mlx5dr_definer *definer,
 	return rte_errno;
 }
 
+static void mlx5dr_definer_optimize_order(struct mlx5dr_definer *definer, int num_log)
+{
+	uint8_t hl_prio[MAX_HL_PRIO - 1] = {MLX5DR_HL_IPV4_SRC,
+					    MLX5DR_HL_IPV4_DST,
+					    MAX_HL_PRIO};
+	int dw = 0, i = 0, j;
+	int *dw_flag;
+	uint8_t tmp;
+
+	dw_flag = optimal_dist_dw[num_log];
+
+	while (hl_prio[i] != MAX_HL_PRIO) {
+		j = 0;
+		/* Finding a candidate to improve its hash distribution */
+		while (j < DW_SELECTORS_MATCH && (hl_prio[i] != definer->dw_selector[j]))
+			j++;
+
+		/* Finding a DW location with good hash distribution */
+		while (dw < DW_SELECTORS_MATCH && dw_flag[dw] == 0)
+			dw++;
+
+		if (dw < DW_SELECTORS_MATCH && j < DW_SELECTORS_MATCH) {
+			tmp = definer->dw_selector[dw];
+			definer->dw_selector[dw] = definer->dw_selector[j];
+			definer->dw_selector[j] = tmp;
+			dw++;
+		}
+		i++;
+	}
+}
+
 static int
 mlx5dr_definer_find_best_match_fit(struct mlx5dr_context *ctx,
 				   struct mlx5dr_definer *definer,
@@ -3355,6 +3413,12 @@ mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher,
 		goto free_fc;
 	}
 
+	if (!mlx5dr_definer_is_jumbo(match_definer) &&
+	    !mlx5dr_matcher_req_fw_wqe(matcher) &&
+	    !mlx5dr_matcher_is_resizable(matcher) &&
+	    !mlx5dr_matcher_is_insert_by_idx(matcher))
+		mlx5dr_definer_optimize_order(match_definer, matcher->attr.rule.num_log);
+
 	/* Find the range definer layout for match templates fcrs */
 	ret = mlx5dr_definer_find_best_range_fit(range_definer, matcher);
 	if (ret) {
-- 
2.39.3


* [PATCH 5/9] net/mlx5/hws: check the rule status on rule update
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                   ` (2 preceding siblings ...)
  2024-02-13  9:50 ` [PATCH 4/9] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 6/9] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Only allow rule updates for rules whose status equals
MLX5DR_RULE_STATUS_CREATED.
Otherwise the rule may be in an unstable state, such as being deleted,
which would result in unexpected behavior.
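
Illustration (not part of the patch; the call's argument list is
abbreviated): an update enqueued while the rule is still being deleted now
fails fast instead of touching unstable state:

  ret = mlx5dr_rule_action_update(rule, /* actions, attr, ... */);
  /* ret != 0 and rte_errno == EBUSY whenever the rule status is not
   * MLX5DR_RULE_STATUS_CREATED, e.g. while a delete is in flight.
   */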

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 6bf087e187..aa00c54e53 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -977,6 +977,12 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
+	if (rule_handle->status != MLX5DR_RULE_STATUS_CREATED) {
+		DR_LOG(ERR, "Current rule status does not allow update");
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
 	ret = mlx5dr_rule_create_hws(rule_handle,
 				     attr,
 				     0,
-- 
2.39.3


* [PATCH 6/9] net/mlx5/hws: fix VLAN item handling on non relaxed mode
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                   ` (3 preceding siblings ...)
  2024-02-13  9:50 ` [PATCH 5/9] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 7/9] net/mlx5/hws: extend action template creation API Itamar Gozlan
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Mark Bloch,
	Alex Vesker
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

If a VLAN item is passed with a null mask, the item handler returns
immediately and thus does not set the default values for non-relaxed
mode.
Also change the non-relaxed default to single-tagged (CVLAN).
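
For illustration (not part of the patch): in non-relaxed mode, an empty
VLAN item in a pattern now matches single-tagged (CVLAN) packets by
default, even when its spec/mask are NULL:

  struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          /* NULL mask: previously skipped, now marks the packet CVLAN */
          { .type = RTE_FLOW_ITEM_TYPE_VLAN },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };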

Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index eb788a772a..b8a546989a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -223,6 +223,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		ib_l4_opcode,		v->hdr.opcode,		rte_flow_item_ib_bth) \
 	X(SET,		random_number,		v->value,		rte_flow_item_random) \
 	X(SET,		ib_l4_bth_a,		v->hdr.a,		rte_flow_item_ib_bth) \
+	X(SET,		cvlan,			STE_CVLAN,		rte_flow_item_vlan) \
 
 /* Item set function format */
 #define X(set_type, func_name, value, item_type) \
@@ -864,6 +865,15 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
 
+	if (!cd->relaxed) {
+		/* Mark packet as tagged (CVLAN) */
+		fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
+		fc->item_idx = item_idx;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		fc->tag_set = &mlx5dr_definer_cvlan_set;
+		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
+	}
+
 	if (!m)
 		return 0;
 
@@ -872,8 +882,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (!cd->relaxed || m->has_more_vlan) {
-		/* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/
+	if (m->has_more_vlan) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_mask_set = &mlx5dr_definer_ones_set;
-- 
2.39.3


* [PATCH 7/9] net/mlx5/hws: extend action template creation API
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                   ` (4 preceding siblings ...)
  2024-02-13  9:50 ` [PATCH 6/9] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 8/9] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 9/9] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Extend the mlx5dr_action_template_create() parameters to include a flags
argument.
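
A usage sketch (illustration; the action list is arbitrary): existing
callers pass 0 to keep the strict ordering check, while the new flag
relaxes it:

  enum mlx5dr_action_type types[] = {
          MLX5DR_ACTION_TYP_REMOVE_HEADER,
          MLX5DR_ACTION_TYP_LAST,
  };

  /* 0 keeps the previous behavior: the action combination is checked */
  at = mlx5dr_action_template_create(types, 0);

  /* Relaxed ordering: the combination check is skipped */
  at = mlx5dr_action_template_create(types,
          MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER);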

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         | 10 +++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.c  | 11 ++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h  |  1 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 16 ++++++++++------
 drivers/net/mlx5/mlx5_flow_hw.c       |  2 +-
 5 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 9d8f8e13dc..c11ec08616 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -84,6 +84,11 @@ enum mlx5dr_match_template_flags {
 	MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1,
 };
 
+enum mlx5dr_action_template_flags {
+	/* Allow relaxed actions order. */
+	MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER = 1 << 0,
+};
+
 enum mlx5dr_send_queue_actions {
 	/* Start executing all pending queued rules */
 	MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC = 1 << 0,
@@ -362,10 +367,13 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt);
  *	An array of actions based on the order of actions which will be provided
  *	with rule_actions to mlx5dr_rule_create. The last action is marked
  *	using MLX5DR_ACTION_TYP_LAST.
+ * @param[in] flags
+ *	Template creation flags
  * @return pointer to mlx5dr_action_template on success NULL otherwise
  */
 struct mlx5dr_action_template *
-mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]);
+mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[],
+			      uint32_t flags);
 
 /* Destroy action template.
  *
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..370886907f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -3385,12 +3385,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 }
 
 struct mlx5dr_action_template *
-mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[])
+mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[],
+			      uint32_t flags)
 {
 	struct mlx5dr_action_template *at;
 	uint8_t num_actions = 0;
 	int i;
 
+	if (flags > MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER) {
+		DR_LOG(ERR, "Unsupported action template flag provided");
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
 	at = simple_calloc(1, sizeof(*at));
 	if (!at) {
 		DR_LOG(ERR, "Failed to allocate action template");
@@ -3398,6 +3405,8 @@ mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[])
 		return NULL;
 	}
 
+	at->flags = flags;
+
 	while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST)
 		;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index fad35a845b..a8d9720c42 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -119,6 +119,7 @@ struct mlx5dr_action_template {
 	uint8_t num_of_action_stes;
 	uint8_t num_actions;
 	uint8_t only_term;
+	uint32_t flags;
 };
 
 struct mlx5dr_action {
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 0d5c462734..402242308d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -686,12 +686,16 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	bool valid;
 	int ret;
 
-	/* Check if action combinabtion is valid */
-	valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type);
-	if (!valid) {
-		DR_LOG(ERR, "Invalid combination in action template");
-		rte_errno = EINVAL;
-		return rte_errno;
+	if (!(at->flags & MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER)) {
+		/* Check if the actions combination is valid,
+		 * in case the actions order is not relaxed.
+		 */
+		valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type);
+		if (!valid) {
+			DR_LOG(ERR, "Invalid combination in action template");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
 	}
 
 	/* Process action template to setters */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3af5e1f160..5ad45ce2ae 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6198,7 +6198,7 @@ flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
 		at->recom_off = recom_off;
 		action_types[recom_off] = recom_type;
 	}
-	dr_template = mlx5dr_action_template_create(action_types);
+	dr_template = mlx5dr_action_template_create(action_types, 0);
 	if (dr_template) {
 		at->dr_actions_num = curr_off;
 	} else {
-- 
2.39.3


* [PATCH 8/9] net/mlx5/hws: add missing actions STE limitation
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                   ` (5 preceding siblings ...)
  2024-02-13  9:50 ` [PATCH 7/9] net/mlx5/hws: extend action template creation API Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-13  9:50 ` [PATCH 9/9] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
  7 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Today, if a remove header action is passed and after it an insert header
action, the action template builder assigns two different STE setters,
because it does not allow an insert header in the same STE as a remove
header.
But with the opposite order, insert header followed by remove header, the
setter placed both actions on the same STE, since the reverse check was
missing.
This patch adds the missing check for the opposite order.
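
For illustration (the insert-header enum name below is an assumption, not
taken from this patch): this is the order that previously shared a single
STE; remove-then-insert already allocated two setters:

  enum mlx5dr_action_type types[] = {
          MLX5DR_ACTION_TYP_INSERT_HEADER, /* assumed enum name */
          MLX5DR_ACTION_TYP_REMOVE_HEADER,
          MLX5DR_ACTION_TYP_LAST,
  };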

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_action.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 370886907f..8589de5557 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -3308,7 +3308,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 		case MLX5DR_ACTION_TYP_REMOVE_HEADER:
 		case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
 			/* Single remove header to header */
-			setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY);
+			setter = mlx5dr_action_setter_find_first(last_setter,
+					ASF_SINGLE1 | ASF_MODIFY | ASF_INSERT);
 			setter->flags |= ASF_SINGLE1 | ASF_REMOVE;
 			setter->set_single = &mlx5dr_action_setter_single;
 			setter->idx_single = i;
-- 
2.39.3


* [PATCH 9/9] net/mlx5/hws: support push_esp flag for insert header action
  2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                   ` (6 preceding siblings ...)
  2024-02-13  9:50 ` [PATCH 8/9] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
@ 2024-02-13  9:50 ` Itamar Gozlan
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
  7 siblings, 1 reply; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-13  9:50 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Support the push_esp flag for the insert header action. It must be set
when inserting an ESP header, and it also sets the next_protocol field in
the IPsec trailer.
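
A configuration sketch (illustration; the header buffer is a placeholder
and the action creation call is elided): an ESP encapsulation sets both
encap and the new push_esp flag:

  struct mlx5dr_action_insert_header esp = {
          .hdr      = { .data = esp_hdr_buf, .sz = sizeof(esp_hdr_buf) },
          .anchor   = MLX5_HEADER_ANCHOR_PACKET_START,
          .offset   = 0,
          .encap    = true,  /* device updates offloaded fields */
          .push_esp = true,  /* mandatory for ESP; also drives the IPsec
                              * trailer next_protocol */
  };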

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h       | 3 ++-
 drivers/net/mlx5/hws/mlx5dr.h        | 4 ++++
 drivers/net/mlx5/hws/mlx5dr_action.c | 4 ++++
 drivers/net/mlx5/hws/mlx5dr_action.h | 1 +
 drivers/net/mlx5/hws/mlx5dr_cmd.c    | 2 ++
 drivers/net/mlx5/hws/mlx5dr_cmd.h    | 1 +
 6 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index abff8e4dc3..6fa5553215 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3608,7 +3608,8 @@ struct mlx5_ifc_stc_ste_param_insert_bits {
 	u8 action_type[0x4];
 	u8 encap[0x1];
 	u8 inline_data[0x1];
-	u8 reserved_at_6[0x4];
+	u8 push_esp[0x1];
+	u8 reserved_at_7[0x3];
 	u8 insert_anchor[0x6];
 	u8 reserved_at_10[0x1];
 	u8 insert_offset[0x7];
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index c11ec08616..78b16f3dc7 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -192,6 +192,10 @@ struct mlx5dr_action_insert_header {
 	 * requiring device to update offloaded fields (for example IPv4 total length).
 	 */
 	bool encap;
+	/* Must be set when adding an ESP header.
+	 * Also sets the next_protocol value in the IPsec trailer.
+	 */
+	bool push_esp;
 };
 
 enum mlx5dr_action_remove_header_type {
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 8589de5557..f55069c675 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -598,6 +598,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		attr->insert_header.encap = action->reformat.encap;
+		attr->insert_header.push_esp = action->reformat.push_esp;
 		attr->insert_header.insert_anchor = action->reformat.anchor;
 		attr->insert_header.arg_id = action->reformat.arg_obj->id;
 		attr->insert_header.header_size = action->reformat.header_size;
@@ -635,6 +636,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
 		attr->insert_header.encap = 0;
+		attr->insert_header.push_esp = 0;
 		attr->insert_header.is_inline = 1;
 		attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
 		attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS;
@@ -1340,6 +1342,7 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
 			action[i].reformat.anchor = MLX5_HEADER_ANCHOR_PACKET_START;
 			action[i].reformat.offset = 0;
 			action[i].reformat.encap = 1;
+			action[i].reformat.push_esp = 0;
 		}
 
 		if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
@@ -2087,6 +2090,7 @@ mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx,
 
 		action[i].reformat.anchor = hdrs[i].anchor;
 		action[i].reformat.encap = hdrs[i].encap;
+		action[i].reformat.push_esp = hdrs[i].push_esp;
 		action[i].reformat.offset = hdrs[i].offset;
 		reformat_hdrs[i].sz = hdrs[i].hdr.sz;
 		reformat_hdrs[i].data = hdrs[i].hdr.data;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index a8d9720c42..0c8e4bbb5a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -149,6 +149,7 @@ struct mlx5dr_action {
 					uint8_t offset;
 					bool encap;
 					uint8_t require_reparse;
+					bool push_esp;
 				} reformat;
 				struct {
 					struct mlx5dr_action
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 876a47147d..28d909df80 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -486,6 +486,8 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
 			 MLX5_MODIFICATION_TYPE_INSERT);
 		MLX5_SET(stc_ste_param_insert, stc_parm, encap,
 			 stc_attr->insert_header.encap);
+		MLX5_SET(stc_ste_param_insert, stc_parm, push_esp,
+			 stc_attr->insert_header.push_esp);
 		MLX5_SET(stc_ste_param_insert, stc_parm, inline_data,
 			 stc_attr->insert_header.is_inline);
 		MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor,
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index 18c2b07fc8..013a7e99e8 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -122,6 +122,7 @@ struct mlx5dr_cmd_stc_modify_attr {
 			uint8_t encap;
 			uint16_t insert_anchor;
 			uint16_t insert_offset;
+			uint8_t push_esp;
 		} insert_header;
 		struct {
 			uint8_t aso_type;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index
  2024-02-13  9:50 ` [PATCH 9/9] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
@ 2024-02-18  5:11   ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 02/10] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
                       ` (9 more replies)
  0 siblings, 10 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Alex Vesker
  Cc: dev

The location of indexed rules is determined by the index, not the item
hash. A matcher test is added to prevent access to non-existent items.
This avoids unnecessary processing and potential segmentation faults.
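
For context, a rough sketch of such a matcher (names per mlx5dr.h; the
attribute values are illustrative assumptions):

	struct mlx5dr_matcher_attr attr = {0};

	attr.mode = MLX5DR_MATCHER_RESOURCE_MODE_RULE;
	attr.insert_mode = MLX5DR_MATCHER_INSERT_BY_INDEX;
	attr.rule.num_log = 12; /* 2^12 index slots */
	/* For such a matcher, mlx5dr_rule_create() places the rule at
	 * rule_attr.rule_idx, so the match items are not needed for the
	 * RX/TX skip decision made in mlx5dr_rule_skip().
	 */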

Fixes: 405242c ("net/mlx5/hws: add rule object")
Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index fa19303b91..e39137a6ee 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -23,6 +23,9 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
 	*skip_rx = false;
 	*skip_tx = false;
 
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		return;
+
 	if (mt->item_flags & MLX5_FLOW_ITEM_REPRESENTED_PORT) {
 		v = items[mt->vport_item_id].spec;
 		vport = flow_hw_conv_port_id(v->port_id);
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 02/10] net/mlx5/hws: add check for not supported fields in VXLAN
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 03/10] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
                       ` (8 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Alex Vesker
  Cc: dev

From: Erez Shitrit <erezsh@nvidia.com>

Don't allow the user to mask over the rsvd0 / rsvd1 fields, which are
not supported.
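
For illustration (a sketch, not from the patch), a mask like the
following is now rejected with ENOTSUP:

	/* Masking reserved VXLAN bits is not supported by the definer */
	struct rte_flow_item_vxlan vxlan_mask = {
		.flags = 0xff,                 /* still allowed */
		.rsvd0 = { 0xff, 0x00, 0x00 }, /* reserved -> ENOTSUP */
	};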

Fixes: dbff89ef806f ("net/mlx5/hws: fix tunnel protocol checks")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 79d98bbf78..8b8757ecac 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -1414,6 +1414,13 @@ mlx5dr_definer_conv_item_vxlan(struct mlx5dr_definer_conv_data *cd,
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
 
+	if (m && (m->rsvd0[0] != 0 || m->rsvd0[1] != 0 || m->rsvd0[2] != 0 ||
+	    m->rsvd1 != 0)) {
+		DR_LOG(ERR, "reserved fields are not supported");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
 	if (inner) {
 		DR_LOG(ERR, "Inner VXLAN item not supported");
 		rte_errno = ENOTSUP;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 03/10] net/mlx5/hws: add support for resizable matchers
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
  2024-02-18  5:11     ` [v2 02/10] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 04/10] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
                       ` (7 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

Add support for matcher resize with the following new API calls:
 - mlx5dr_matcher_resize_set_target
 - mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the
src matcher to the dst matcher. Both matchers must have the same
characteristics (e.g. same mt, same at). It is the user's responsibility
to make sure that the dst matcher has enough space for the moved rules.
After this function returns, the user can move rules from src into dst
matcher, but is no longer allowed to insert rules into the src matcher.

The second function moves a rule from the matcher that is being resized
to the bigger matcher. Moving a single rule includes creating a new rule
in the destination matcher and deleting the rule from the source
matcher. This operation creates a single completion.
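
A hedged usage sketch (not part of the patch; allocation of the bigger
matcher, queue polling and error handling are elided):

	/* Grow src by moving all its rules into a larger dst matcher */
	if (mlx5dr_matcher_resize_set_target(src_matcher, dst_matcher))
		return -rte_errno;

	for (i = 0; i < num_rules; i++)
		if (mlx5dr_matcher_resize_rule_move(src_matcher, rules[i],
						    &rule_attr))
			return -rte_errno;

	/* Each move generates one completion; poll them all before
	 * destroying src_matcher.
	 */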

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         |  39 ++++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 ++
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 290 ++++++++++++++++++++++----
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  30 ++-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 ++++
 8 files changed, 573 insertions(+), 41 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 9c5b068c93..49f72118ba 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr {
 	/* Define the insertion and distribution modes for this matcher */
 	enum mlx5dr_matcher_insert_mode insert_mode;
 	enum mlx5dr_matcher_distribute_mode distribute_mode;
+	/* Define whether the created matcher supports resizing into a bigger matcher */
+	bool resizable;
 	union {
 		struct {
 			uint8_t sz_row_log;
@@ -440,6 +442,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher);
 int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_action_template *at);
 
+/* Link two matchers and enable moving rules from src matcher to dst matcher.
+ * Both matchers must be in the same table type, must be created with 'resizable'
+ * property, and should have the same characteristics (e.g. same mt, same at).
+ *
+ * It is the user's responsibility to make sure that the dst matcher
+ * was allocated with the appropriate size.
+ *
+ * Once the function is completed, the user is:
+ *  - allowed to move rules from src into dst matcher
+ *  - no longer allowed to insert rules to the src matcher
+ *
+ * The user is always allowed to insert rules to the dst matcher and
+ * to delete rules from any matcher.
+ *
+ * @param[in] src_matcher
+ *	source matcher for moving rules from
+ * @param[in] dst_matcher
+ *	destination matcher for moving rules to
+ * @return zero on successful move, non-zero otherwise.
+ */
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher);
+
+/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher
+ *
+ * @param[in] src_matcher
+ *	matcher that the rule belongs to
+ * @param[in] rule
+ *	the rule to move
+ * @param[in] attr
+ *	rule attributes
+ * @return zero on success, non-zero otherwise.
+ */
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr);
+
 /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation.
  *
  * @return size in bytes of rule handle struct.
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 8b8757ecac..e564062313 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -3296,9 +3296,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer)
 	return definer->obj->id;
 }
 
-static int
-mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
-		       struct mlx5dr_definer *definer_b)
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b)
 {
 	int i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index ced9d9da13..71cc0e94de 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -733,4 +733,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache);
 
 void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache);
 
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b);
+
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 4ea161eae6..0d5c462734 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	return 0;
 }
 
+static int
+mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	resize_data = simple_calloc(1, sizeof(*resize_data));
+	if (!resize_data) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	resize_data->stc = src_matcher->action_ste.stc;
+	resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0;
+	resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1;
+	resize_data->action_ste_pool = src_matcher->action_ste.max_stes ?
+				       src_matcher->action_ste.pool :
+				       NULL;
+
+	/* Place the new resized matcher on the dst matcher's list */
+	LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+			 resize_data, next);
+
+	/* Move all the previous resized matchers to the dst matcher's list */
+	while (!LIST_EMPTY(&src_matcher->resize_data)) {
+		resize_data = LIST_FIRST(&src_matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+		LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+				 resize_data, next);
+	}
+
+	return 0;
+}
+
+static void
+mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	if (!mlx5dr_matcher_is_resizable(matcher) ||
+	    !matcher->action_ste.max_stes)
+		return;
+
+	while (!LIST_EMPTY(&matcher->resize_data)) {
+		resize_data = LIST_FIRST(&matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+
+		mlx5dr_action_free_single_stc(matcher->tbl->ctx,
+					      matcher->tbl->type,
+					      &resize_data->stc);
+
+		if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB)
+			mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1);
+		mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0);
+		if (resize_data->action_ste_pool)
+			mlx5dr_pool_destroy(resize_data->action_ste_pool);
+		simple_free(resize_data);
+	}
+}
+
 static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
 {
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
@@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher)
 {
 	struct mlx5dr_table *tbl = matcher->tbl;
 
-	if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION)
+	if (!matcher->action_ste.max_stes ||
+	    matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION ||
+	    mlx5dr_matcher_is_in_resize(matcher))
 		return;
 
 	mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc);
@@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 			DR_LOG(ERR, "Root matcher does not support at attaching");
 			goto not_supported;
 		}
+		if (attr->resizable) {
+			DR_LOG(ERR, "Root matcher does not support resizing");
+			goto not_supported;
+		}
 		return 0;
 	}
 
@@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 	    attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH)
 		attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log);
 
+	matcher->flags |= attr->resizable ? MLX5DR_MATCHER_FLAGS_RESIZABLE : 0;
+
 	return mlx5dr_matcher_check_attr_sz(caps, attr);
 
 not_supported:
@@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher)
 
 static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher)
 {
+	mlx5dr_matcher_resize_uninit(matcher);
 	mlx5dr_matcher_disconnect(matcher);
 	mlx5dr_matcher_create_uninit_shared(matcher);
 	mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH);
@@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt)
 	simple_free(mt);
 	return 0;
 }
+
+static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher,
+					  struct mlx5dr_matcher *dst_matcher)
+{
+	int i;
+
+	if (mlx5dr_table_is_root(src_matcher->tbl) ||
+	    mlx5dr_table_is_root(dst_matcher->tbl)) {
+		DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported");
+		goto out_einval;
+	}
+
+	if (src_matcher->tbl->type != dst_matcher->tbl->type) {
+		DR_LOG(ERR, "Table type mismatch for src/dst matchers");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_req_fw_wqe(src_matcher) ||
+	    mlx5dr_matcher_req_fw_wqe(dst_matcher)) {
+		DR_LOG(ERR, "Matchers require FW WQE - resize unsupported");
+		goto out_einval;
+	}
+
+	if (!mlx5dr_matcher_is_resizable(src_matcher) ||
+	    !mlx5dr_matcher_is_resizable(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is not resizable");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_insert_by_idx(src_matcher) !=
+	    mlx5dr_matcher_is_insert_by_idx(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matchers insert mode mismatch");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_in_resize(src_matcher) ||
+	    mlx5dr_matcher_is_in_resize(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is already in resize");
+		goto out_einval;
+	}
+
+	/* Compare match templates - make sure the definers are equivalent */
+	if (src_matcher->num_of_mt != dst_matcher->num_of_mt) {
+		DR_LOG(ERR, "Src/dst matcher match templates mismatch");
+		goto out_einval;
+	}
+
+	if (src_matcher->action_ste.max_stes > dst_matcher->action_ste.max_stes) {
+		DR_LOG(ERR, "Src/dst matcher max STEs mismatch");
+		goto out_einval;
+	}
+
+	for (i = 0; i < src_matcher->num_of_mt; i++) {
+		if (mlx5dr_definer_compare(src_matcher->mt[i].definer,
+					   dst_matcher->mt[i].definer)) {
+			DR_LOG(ERR, "Src/dst matcher definers mismatch");
+			goto out_einval;
+		}
+	}
+
+	return 0;
+
+out_einval:
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher)
+{
+	int ret = 0;
+
+	pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock);
+
+	if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) {
+		ret = -rte_errno;
+		goto out;
+	}
+
+	src_matcher->resize_dst = dst_matcher;
+
+	if (mlx5dr_matcher_resize_init(src_matcher)) {
+		src_matcher->resize_dst = NULL;
+		ret = -rte_errno;
+	}
+
+out:
+	pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock);
+	return ret;
+}
+
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) {
+		DR_LOG(ERR, "Matcher is not resizable or not in resize");
+		goto out_einval;
+	}
+
+	if (unlikely(src_matcher != rule->matcher)) {
+		DR_LOG(ERR, "Rule doesn't belong to src matcher");
+		goto out_einval;
+	}
+
+	return mlx5dr_rule_move_hws_add(rule, attr);
+
+out_einval:
+	rte_errno = EINVAL;
+	return -rte_errno;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
index 363a61fd41..0f2bf96e8b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.h
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags {
 	MLX5DR_MATCHER_FLAGS_RANGE_DEFINER	= 1 << 0,
 	MLX5DR_MATCHER_FLAGS_HASH_DEFINER	= 1 << 1,
 	MLX5DR_MATCHER_FLAGS_COLLISION		= 1 << 2,
+	MLX5DR_MATCHER_FLAGS_RESIZABLE		= 1 << 3,
 };
 
 struct mlx5dr_match_template {
@@ -59,6 +60,14 @@ struct mlx5dr_matcher_action_ste {
 	uint8_t max_stes;
 };
 
+struct mlx5dr_matcher_resize_data {
+	struct mlx5dr_pool_chunk stc;
+	struct mlx5dr_devx_obj *action_ste_rtc_0;
+	struct mlx5dr_devx_obj *action_ste_rtc_1;
+	struct mlx5dr_pool *action_ste_pool;
+	LIST_ENTRY(mlx5dr_matcher_resize_data) next;
+};
+
 struct mlx5dr_matcher {
 	struct mlx5dr_table *tbl;
 	struct mlx5dr_matcher_attr attr;
@@ -71,10 +80,12 @@ struct mlx5dr_matcher {
 	uint8_t flags;
 	struct mlx5dr_devx_obj *end_ft;
 	struct mlx5dr_matcher *col_matcher;
+	struct mlx5dr_matcher *resize_dst;
 	struct mlx5dr_matcher_match_ste match_ste;
 	struct mlx5dr_matcher_action_ste action_ste;
 	struct mlx5dr_definer *hash_definer;
 	LIST_ENTRY(mlx5dr_matcher) next;
+	LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data;
 };
 
 static inline bool
@@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt)
 	return (!!mt->range_definer);
 }
 
+static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher)
+{
+	return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE);
+}
+
+static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher)
+{
+	return !!matcher->resize_dst;
+}
+
 static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher)
 {
 	/* Currently HWS doesn't support hash different from match or range */
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index e39137a6ee..6bf087e187 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -114,6 +114,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 	}
 }
 
+static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule,
+				     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst;
+
+	if (rule->resize_info->rtc_0) {
+		ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id;
+		ste_attr->retry_rtc_0 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_0->id : 0;
+	}
+	if (rule->resize_info->rtc_1) {
+		ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id;
+		ste_attr->retry_rtc_1 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_1->id : 0;
+	}
+}
+
 static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 				 struct mlx5dr_rule *rule,
 				 bool err,
@@ -134,6 +151,34 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info));
+	if (unlikely(!rule->resize_info)) {
+		assert(rule->resize_info);
+		rte_errno = ENOMEM;
+	}
+
+	memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl,
+	       sizeof(rule->resize_info->ctrl_seg));
+	memcpy(rule->resize_info->data_seg, ste_attr->wqe_data,
+	       sizeof(rule->resize_info->data_seg));
+
+	rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ?
+					     rule->matcher->action_ste.pool :
+					     NULL;
+}
+
+static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule)
+{
+	if (rule->resize_info) {
+		simple_free(rule->resize_info);
+		rule->resize_info = NULL;
+	}
+}
+
 static void
 mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 			     struct mlx5dr_send_ste_attr *ste_attr)
@@ -161,17 +206,29 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 		return;
 	}
 
-	if (is_jumbo)
-		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
-	else
-		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+	if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
+		if (is_jumbo)
+			memcpy(&rule->tag.jumbo, ste_attr->wqe_data->action, MLX5DR_JUMBO_TAG_SZ);
+		else
+			memcpy(&rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+		return;
+	}
+
+	mlx5dr_rule_save_resize_info(rule, ste_attr);
 }
 
 static void
 mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
 {
-	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		simple_free(rule->tag_ptr);
+		return;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_clear_resize_info(rule);
+		return;
+	}
 }
 
 static void
@@ -188,8 +245,11 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
 			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
 			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
 		}
-	} else {
+	} else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
 		ste_attr->wqe_tag = &rule->tag;
+	} else {
+		ste_attr->wqe_tag = (struct mlx5dr_rule_match_tag *)
+			&rule->resize_info->data_seg[MLX5DR_STE_CTRL_SZ];
 	}
 }
 
@@ -220,6 +280,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_pool *pool;
 
 	if (rule->action_ste_idx > -1 &&
 	    !matcher->attr.optimize_using_rule_idx &&
@@ -229,7 +290,11 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 		/* This release is safe only when the rule match part was deleted */
 		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
 		ste.offset = rule->action_ste_idx;
-		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+
+		/* Free the original action pool if rule was resized */
+		pool = mlx5dr_matcher_is_resizable(matcher) ? rule->resize_info->action_ste_pool :
+							      matcher->action_ste.pool;
+		mlx5dr_pool_chunk_free(pool, &ste);
 	}
 }
 
@@ -266,6 +331,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr)
+{
+	/* Save the old RTC IDs to be later used in match STE delete */
+	rule->resize_info->rtc_0 = rule->rtc_0;
+	rule->resize_info->rtc_1 = rule->rtc_1;
+	rule->resize_info->rule_idx = attr->rule_idx;
+
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING;
+}
+
 static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 					 struct mlx5dr_rule_attr *attr,
 					 uint8_t mt_idx,
@@ -346,7 +428,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 	/* Send WQEs to FW */
 	mlx5dr_send_stes_fw(queue, &ste_attr);
 
-	/* Backup TAG on the rule for deletion */
+	/* Backup TAG on the rule for deletion, and save ctrl/data
+	 * segments to be used when resizing the matcher.
+	 */
 	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
@@ -469,7 +553,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		mlx5dr_send_ste(queue, &ste_attr);
 	}
 
-	/* Backup TAG on the rule for deletion, only after insertion */
+	/* Backup TAG on the rule for deletion and resize info for
+	 * moving rules to a new matcher, only after insertion.
+	 */
 	if (!is_update)
 		mlx5dr_rule_save_delete_info(rule, &ste_attr);
 
@@ -496,7 +582,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
-	/* Clear complex tag */
+	/* Clear complex tag or info that was saved for matcher resizing */
 	mlx5dr_rule_clear_delete_info(rule);
 
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
@@ -571,12 +657,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 
 	mlx5dr_rule_load_delete_info(rule, &ste_attr);
 
-	if (unlikely(fw_wqe)) {
+	if (unlikely(fw_wqe))
 		mlx5dr_send_stes_fw(queue, &ste_attr);
-		mlx5dr_rule_clear_delete_info(rule);
-	} else {
+	else
 		mlx5dr_send_ste(queue, &ste_attr);
-	}
+
+	mlx5dr_rule_clear_delete_info(rule);
 
 	return 0;
 }
@@ -664,9 +750,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
-static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+
 	if (unlikely(!attr->user_data)) {
 		rte_errno = EINVAL;
 		return rte_errno;
@@ -681,6 +769,153 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck_move(struct mlx5dr_rule *rule,
+					     struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) {
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) {
+		/* Matcher in resize - new rules are not allowed */
+		rte_errno = EAGAIN;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_matcher *matcher = rule->matcher;
+
+	if (unlikely((mlx5dr_table_is_root(matcher->tbl) ||
+		     mlx5dr_matcher_req_fw_wqe(matcher)))) {
+		DR_LOG(ERR, "Rule update is not supported on current matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(!matcher->attr.optimize_using_rule_idx &&
+		     !mlx5dr_matcher_is_insert_by_idx(matcher))) {
+		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		DR_LOG(ERR, "Rule update is not supported on resizable matcher");
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	if (unlikely(rule->status != MLX5DR_RULE_STATUS_CREATED)) {
+		DR_LOG(ERR, "Current rule status does not allow update");
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck_create(rule, attr);
+}
+
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue_ptr,
+				void *user_data)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0};
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_engine *queue = queue_ptr;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+
+	/* Send dependent WQEs */
+	mlx5dr_send_all_dep_wqe(queue);
+
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING;
+
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.send_attr.user_data = user_data;
+	ste_attr.rtc_0 = rule->resize_info->rtc_0;
+	ste_attr.rtc_1 = rule->resize_info->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1;
+	ste_attr.wqe_ctrl = &empty_wqe_ctrl;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		ste_attr.direct_index = rule->resize_info->rule_idx;
+
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	if (unlikely(mlx5dr_rule_enqueue_precheck_move(rule, attr)))
+		return -rte_errno;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_move_init(rule, attr);
+
+	mlx5dr_rule_move_get_rtc(rule, &ste_attr);
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg;
+	ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg;
+	ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+				attr->rule_idx : 0;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	/* Send dependent WQEs */
+	if (!attr->burst)
+		mlx5dr_send_all_dep_wqe(queue);
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -689,13 +924,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       struct mlx5dr_rule_attr *attr,
 		       struct mlx5dr_rule *rule_handle)
 {
-	struct mlx5dr_context *ctx;
 	int ret;
 
 	rule_handle->matcher = matcher;
-	ctx = matcher->tbl->ctx;
 
-	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr)))
 		return -rte_errno;
 
 	assert(matcher->num_of_mt >= mt_idx);
@@ -723,7 +956,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 {
 	int ret;
 
-	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
 		return -rte_errno;
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
@@ -739,24 +972,9 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 			      struct mlx5dr_rule_action rule_actions[],
 			      struct mlx5dr_rule_attr *attr)
 {
-	struct mlx5dr_matcher *matcher = rule_handle->matcher;
 	int ret;
 
-	if (unlikely(mlx5dr_table_is_root(matcher->tbl) ||
-	    unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))) {
-		DR_LOG(ERR, "Rule update not supported on current matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (!matcher->attr.optimize_using_rule_idx &&
-	    !mlx5dr_matcher_is_insert_by_idx(matcher)) {
-		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
-		rte_errno = ENOTSUP;
-		return -rte_errno;
-	}
-
-	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
 	ret = mlx5dr_rule_create_hws(rule_handle,
@@ -780,7 +998,7 @@ int mlx5dr_rule_hash_calculate(struct mlx5dr_matcher *matcher,
 			       enum mlx5dr_rule_hash_calc_mode mode,
 			       uint32_t *ret_hash)
 {
-	uint8_t tag[MLX5DR_STE_SZ] = {0};
+	uint8_t tag[MLX5DR_WQE_SZ_GTA_DATA] = {0};
 	struct mlx5dr_match_template *mt;
 
 	if (!matcher || !matcher->mt) {
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f7d97eead5..07adf9c5ad 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,7 +10,6 @@ enum {
 	MLX5DR_ACTIONS_SZ = 12,
 	MLX5DR_MATCH_TAG_SZ = 32,
 	MLX5DR_JUMBO_TAG_SZ = 44,
-	MLX5DR_STE_SZ = 64,
 };
 
 enum mlx5dr_rule_status {
@@ -23,6 +22,12 @@ enum mlx5dr_rule_status {
 	MLX5DR_RULE_STATUS_FAILED,
 };
 
+enum mlx5dr_rule_move_state {
+	MLX5DR_RULE_RESIZE_STATE_IDLE,
+	MLX5DR_RULE_RESIZE_STATE_WRITING,
+	MLX5DR_RULE_RESIZE_STATE_DELETING,
+};
+
 struct mlx5dr_rule_match_tag {
 	union {
 		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
@@ -33,6 +38,16 @@ struct mlx5dr_rule_match_tag {
 	};
 };
 
+struct mlx5dr_rule_resize_info {
+	uint8_t state;
+	uint32_t rtc_0;
+	uint32_t rtc_1;
+	uint32_t rule_idx;
+	struct mlx5dr_pool *action_ste_pool;
+	uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */
+	uint8_t data_seg[MLX5DR_WQE_SZ_GTA_DATA]; /* Data segment of STE: 64 bytes */
+};
+
 struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
@@ -40,6 +55,7 @@ struct mlx5dr_rule {
 		/* Pointer to tag to store more than one tag */
 		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
+		struct mlx5dr_rule_resize_info *resize_info;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
 	uint32_t rtc_1; /* The RTC into which the STE was inserted */
@@ -50,4 +66,16 @@ struct mlx5dr_rule {
 
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
 
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue, void *user_data);
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr);
+
+static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule)
+{
+	return rule->resize_info &&
+	       rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE;
+}
+
 #endif /* MLX5DR_RULE_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..64138279a1 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue)
 	mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+			/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+			/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 
 	/* Update rule status for the last completion */
 	if (!priv->rule->pending_wqes) {
+		if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+			mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+			return;
+		}
+
 		if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 			/* Rule completely failed and doesn't require cleanup */
 			if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 04/10] net/mlx5/hws: reordering the STE fields to improve hash
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
  2024-02-18  5:11     ` [v2 02/10] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
  2024-02-18  5:11     ` [v2 03/10] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 05/10] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
                       ` (6 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

Inserting two rules whose hash calculation gives the same result into
the same matcher causes collisions, which can degrade PPS. Changing the
order of some fields in the STE changes the hash result, and each such
ordering yields a different hash distribution over the inputs. By using
precomputed optimal DW locations, we can change the STE order for a
limited set of the most common values, reducing the number of hash
collisions and improving latency.
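
For example (an illustrative reading of the table below, not text from
the original patch): for a matcher sized for 2^3 rules, row 3 of the
table reads {1, 0, 1, 0, 1, 0}; with columns ordered {DW5, DW4, DW3,
DW2, DW1, DW0}, the DW5, DW3 and DW1 positions give a complete hash
distribution, so the optimizer swaps the prioritized IPv4 source and
destination selectors into the first such positions.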

Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 64 +++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index e564062313..eb788a772a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -100,6 +100,33 @@
 	__mlx5_dw_off(typ, fld))) >> __mlx5_dw_bit_off(typ, fld)) & \
 	__mlx5_mask(typ, fld))
 
+#define MAX_ROW_LOG 31
+
+enum header_layout {
+	MLX5DR_HL_IPV4_SRC = 64,
+	MLX5DR_HL_IPV4_DST = 65,
+	MAX_HL_PRIO,
+};
+
+/* Each row (i) indicates a different matcher size, and each column (j)
+ * represents {DW5, DW4, DW3, DW2, DW1, DW0}.
+ * For values 0,..,2^i, and j (DW) 0,..,5: optimal_dist_dw[i][j] is 1 if the
+ * number of different hash results on these values equals 2^i, meaning this
+ * DW hash distribution is complete.
+ */
+int optimal_dist_dw[MAX_ROW_LOG][DW_SELECTORS_MATCH] = {
+	{1, 1, 1, 1, 1, 1}, {0, 1, 1, 0, 1, 0}, {0, 1, 1, 0, 1, 0},
+	{1, 0, 1, 0, 1, 0}, {0, 0, 0, 1, 1, 0}, {0, 1, 1, 0, 1, 0},
+	{0, 0, 0, 0, 1, 0}, {0, 1, 1, 0, 1, 0}, {0, 0, 0, 0, 0, 0},
+	{1, 0, 1, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 1, 0, 1, 0, 0},
+	{1, 0, 0, 0, 0, 0}, {0, 0, 1, 0, 0, 1}, {1, 1, 1, 0, 0, 0},
+	{1, 1, 1, 0, 1, 0}, {0, 0, 1, 1, 0, 0}, {0, 1, 1, 0, 0, 1},
+	{0, 0, 1, 0, 0, 1}, {0, 0, 1, 0, 0, 0}, {1, 0, 1, 1, 0, 0},
+	{1, 0, 1, 0, 0, 1}, {0, 0, 1, 1, 0, 1}, {1, 1, 1, 0, 0, 0},
+	{0, 1, 0, 1, 0, 1}, {0, 0, 0, 0, 0, 1}, {0, 0, 0, 1, 1, 1},
+	{0, 0, 1, 0, 0, 1}, {1, 1, 0, 1, 1, 0}, {0, 0, 0, 0, 1, 0},
+	{0, 0, 0, 1, 1, 0}};
+
 struct mlx5dr_definer_sel_ctrl {
 	uint8_t allowed_full_dw; /* Full DW selectors cover all offsets */
 	uint8_t allowed_lim_dw;  /* Limited DW selectors cover offset < 64 */
@@ -3185,6 +3212,37 @@ mlx5dr_definer_find_best_range_fit(struct mlx5dr_definer *definer,
 	return rte_errno;
 }
 
+static void mlx5dr_definer_optimize_order(struct mlx5dr_definer *definer, int num_log)
+{
+	uint8_t hl_prio[MAX_HL_PRIO - 1] = {MLX5DR_HL_IPV4_SRC,
+					    MLX5DR_HL_IPV4_DST,
+					    MAX_HL_PRIO};
+	int dw = 0, i = 0, j;
+	int *dw_flag;
+	uint8_t tmp;
+
+	dw_flag = optimal_dist_dw[num_log];
+
+	while (hl_prio[i] != MAX_HL_PRIO) {
+		j = 0;
+		/* Finding a candidate to improve its hash distribution */
+		while (j < DW_SELECTORS_MATCH && (hl_prio[i] != definer->dw_selector[j]))
+			j++;
+
+		/* Finding a DW location with good hash distribution */
+		while (dw < DW_SELECTORS_MATCH && dw_flag[dw] == 0)
+			dw++;
+
+		if (dw < DW_SELECTORS_MATCH && j < DW_SELECTORS_MATCH) {
+			tmp = definer->dw_selector[dw];
+			definer->dw_selector[dw] = definer->dw_selector[j];
+			definer->dw_selector[j] = tmp;
+			dw++;
+		}
+		i++;
+	}
+}
+
 static int
 mlx5dr_definer_find_best_match_fit(struct mlx5dr_context *ctx,
 				   struct mlx5dr_definer *definer,
@@ -3355,6 +3413,12 @@ mlx5dr_definer_calc_layout(struct mlx5dr_matcher *matcher,
 		goto free_fc;
 	}
 
+	if (!mlx5dr_definer_is_jumbo(match_definer) &&
+	    !mlx5dr_matcher_req_fw_wqe(matcher) &&
+	    !mlx5dr_matcher_is_resizable(matcher) &&
+	    !mlx5dr_matcher_is_insert_by_idx(matcher))
+		mlx5dr_definer_optimize_order(match_definer, matcher->attr.rule.num_log);
+
 	/* Find the range definer layout for match templates fcrs */
 	ret = mlx5dr_definer_find_best_range_fit(range_definer, matcher);
 	if (ret) {
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 05/10] net/mlx5/hws: check the rule status on rule update
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (2 preceding siblings ...)
  2024-02-18  5:11     ` [v2 04/10] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 06/10] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
                       ` (5 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Only allow rule updates for rules whose status equals
MLX5DR_RULE_STATUS_CREATED.
Otherwise, the rule may be in an unstable state, such as mid-deletion,
which would result in faulty, unexpected behavior.
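
A hedged caller-side sketch (not from the patch): EBUSY can be treated
as transient and the update retried after draining completions:

	ret = mlx5dr_rule_action_update(rule, at_idx, rule_actions, &attr);
	if (ret && rte_errno == EBUSY) {
		/* Rule is still creating/deleting; drain and retry */
		mlx5dr_send_queue_action(ctx, attr.queue_id,
					 MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC);
		/* ... poll completions, then retry the update ... */
	}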

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 6bf087e187..aa00c54e53 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -977,6 +977,12 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
+	if (rule_handle->status != MLX5DR_RULE_STATUS_CREATED) {
+		DR_LOG(ERR, "Current rule status does not allow update");
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+
 	ret = mlx5dr_rule_create_hws(rule_handle,
 				     attr,
 				     0,
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 06/10] net/mlx5/hws: fix VLAN item handling on non relaxed mode
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (3 preceding siblings ...)
  2024-02-18  5:11     ` [v2 05/10] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 07/10] net/mlx5/hws: extend action template creation API Itamar Gozlan
                       ` (4 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad, Mark Bloch,
	Alex Vesker
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

If a VLAN item was passed with a null mask, the item handler would
return immediately and thus would not set the default values for
non-relaxed mode.
Also change the non-relaxed default to single-tagged (CVLAN).
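
To illustrate (a sketch, not from the patch): with this fix, a VLAN
item with NULL spec/mask still matches single-tagged (CVLAN) packets
in non-relaxed mode:

	struct rte_flow_item items[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_VLAN }, /* spec/mask NULL */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};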

Fixes: c55c2bf35333 ("net/mlx5/hws: add definer layer")
Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index eb788a772a..b8a546989a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -223,6 +223,7 @@ struct mlx5dr_definer_conv_data {
 	X(SET,		ib_l4_opcode,		v->hdr.opcode,		rte_flow_item_ib_bth) \
 	X(SET,		random_number,		v->value,		rte_flow_item_random) \
 	X(SET,		ib_l4_bth_a,		v->hdr.a,		rte_flow_item_ib_bth) \
+	X(SET,		cvlan,			STE_CVLAN,		rte_flow_item_vlan) \
 
 /* Item set function format */
 #define X(set_type, func_name, value, item_type) \
@@ -864,6 +865,15 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 	struct mlx5dr_definer_fc *fc;
 	bool inner = cd->tunnel;
 
+	if (!cd->relaxed) {
+		/* Mark packet as tagged (CVLAN) */
+		fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
+		fc->item_idx = item_idx;
+		fc->tag_mask_set = &mlx5dr_definer_ones_set;
+		fc->tag_set = &mlx5dr_definer_cvlan_set;
+		DR_CALC_SET(fc, eth_l2, first_vlan_qualifier, inner);
+	}
+
 	if (!m)
 		return 0;
 
@@ -872,8 +882,7 @@ mlx5dr_definer_conv_item_vlan(struct mlx5dr_definer_conv_data *cd,
 		return rte_errno;
 	}
 
-	if (!cd->relaxed || m->has_more_vlan) {
-		/* Mark packet as tagged (CVLAN or SVLAN) even if TCI is not specified.*/
+	if (m->has_more_vlan) {
 		fc = &cd->fc[DR_CALC_FNAME(VLAN_TYPE, inner)];
 		fc->item_idx = item_idx;
 		fc->tag_mask_set = &mlx5dr_definer_ones_set;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 07/10] net/mlx5/hws: extend action template creation API
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (4 preceding siblings ...)
  2024-02-18  5:11     ` [v2 06/10] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 08/10] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
                       ` (3 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Extend the mlx5dr_action_template_create function parameters to
include a flags parameter.
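
A short usage sketch (the action-type list is an illustrative
assumption):

	enum mlx5dr_action_type types[] = {
		MLX5DR_ACTION_TYP_MODIFY_HDR,
		MLX5DR_ACTION_TYP_TBL,
		MLX5DR_ACTION_TYP_LAST,
	};

	/* Passing 0 keeps the previous strict-order behavior; passing
	 * MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER skips the action
	 * combination check (see the matcher change below).
	 */
	at = mlx5dr_action_template_create(types, 0);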

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         | 10 +++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.c  | 11 ++++++++++-
 drivers/net/mlx5/hws/mlx5dr_action.h  |  1 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 16 ++++++++++------
 drivers/net/mlx5/mlx5_flow_hw.c       |  2 +-
 5 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 49f72118ba..3647e25cf2 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -84,6 +84,11 @@ enum mlx5dr_match_template_flags {
 	MLX5DR_MATCH_TEMPLATE_FLAG_RELAXED_MATCH = 1,
 };
 
+enum mlx5dr_action_template_flags {
+	/* Allow relaxed actions order. */
+	MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER = 1 << 0,
+};
+
 enum mlx5dr_send_queue_actions {
 	/* Start executing all pending queued rules */
 	MLX5DR_SEND_QUEUE_ACTION_DRAIN_ASYNC = 1 << 0,
@@ -383,10 +388,13 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt);
  *	An array of actions based on the order of actions which will be provided
  *	with rule_actions to mlx5dr_rule_create. The last action is marked
  *	using MLX5DR_ACTION_TYP_LAST.
+ * @param[in] flags
+ *	Template creation flags
  * @return pointer to mlx5dr_action_template on success NULL otherwise
  */
 struct mlx5dr_action_template *
-mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[]);
+mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[],
+			      uint32_t flags);
 
 /* Destroy action template.
  *
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 862ee3e332..370886907f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -3385,12 +3385,19 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 }
 
 struct mlx5dr_action_template *
-mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[])
+mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[],
+			      uint32_t flags)
 {
 	struct mlx5dr_action_template *at;
 	uint8_t num_actions = 0;
 	int i;
 
+	if (flags > MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER) {
+		DR_LOG(ERR, "Unsupported action template flag provided");
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
 	at = simple_calloc(1, sizeof(*at));
 	if (!at) {
 		DR_LOG(ERR, "Failed to allocate action template");
@@ -3398,6 +3405,8 @@ mlx5dr_action_template_create(const enum mlx5dr_action_type action_type[])
 		return NULL;
 	}
 
+	at->flags = flags;
+
 	while (action_type[num_actions++] != MLX5DR_ACTION_TYP_LAST)
 		;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index fad35a845b..a8d9720c42 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -119,6 +119,7 @@ struct mlx5dr_action_template {
 	uint8_t num_of_action_stes;
 	uint8_t num_actions;
 	uint8_t only_term;
+	uint32_t flags;
 };
 
 struct mlx5dr_action {
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 0d5c462734..402242308d 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -686,12 +686,16 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	bool valid;
 	int ret;
 
-	/* Check if action combinabtion is valid */
-	valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type);
-	if (!valid) {
-		DR_LOG(ERR, "Invalid combination in action template");
-		rte_errno = EINVAL;
-		return rte_errno;
+	if (!(at->flags & MLX5DR_ACTION_TEMPLATE_FLAG_RELAXED_ORDER)) {
+		/* Check if the actions combination is valid,
+		 * in the case of a non-relaxed actions order.
+		 */
+		valid = mlx5dr_action_check_combo(at->action_type_arr, matcher->tbl->type);
+		if (!valid) {
+			DR_LOG(ERR, "Invalid combination in action template");
+			rte_errno = EINVAL;
+			return rte_errno;
+		}
 	}
 
 	/* Process action template to setters */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3bb3a9a178..9d3dad65d4 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6202,7 +6202,7 @@ flow_hw_dr_actions_template_create(struct rte_eth_dev *dev,
 		at->recom_off = recom_off;
 		action_types[recom_off] = recom_type;
 	}
-	dr_template = mlx5dr_action_template_create(action_types);
+	dr_template = mlx5dr_action_template_create(action_types, 0);
 	if (dr_template) {
 		at->dr_actions_num = curr_off;
 	} else {
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 08/10] net/mlx5/hws: add missing actions STE limitation
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (5 preceding siblings ...)
  2024-02-18  5:11     ` [v2 07/10] net/mlx5/hws: extend action template creation API Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 09/10] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
                       ` (2 subsequent siblings)
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Today, if we pass a remove header action followed by an insert header
action, the action template builder sets two different STE setters,
because it does not allow an insert header in the same STE as a remove
header.
But with the opposite order, insert header and then remove header, the
setter would set both of them on the same STE, since the opposite check
was missing.
This patch adds the missing opposite limitation.
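
For example (a sketch; the ordering is what matters), a template like
the following previously collapsed both header operations into one STE
and is now split into two setters:

	enum mlx5dr_action_type types[] = {
		MLX5DR_ACTION_TYP_INSERT_HEADER, /* insert first ... */
		MLX5DR_ACTION_TYP_REMOVE_HEADER, /* ... then remove */
		MLX5DR_ACTION_TYP_LAST,
	};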

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_action.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 370886907f..8589de5557 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -3308,7 +3308,8 @@ int mlx5dr_action_template_process(struct mlx5dr_action_template *at)
 		case MLX5DR_ACTION_TYP_REMOVE_HEADER:
 		case MLX5DR_ACTION_TYP_REFORMAT_TNL_L2_TO_L2:
 			/* Single remove header to header */
-			setter = mlx5dr_action_setter_find_first(last_setter, ASF_SINGLE1 | ASF_MODIFY);
+			setter = mlx5dr_action_setter_find_first(last_setter,
+					ASF_SINGLE1 | ASF_MODIFY | ASF_INSERT);
 			setter->flags |= ASF_SINGLE1 | ASF_REMOVE;
 			setter->set_single = &mlx5dr_action_setter_single;
 			setter->idx_single = i;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 09/10] net/mlx5/hws: support push_esp flag for insert header action
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (6 preceding siblings ...)
  2024-02-18  5:11     ` [v2 08/10] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-18  5:11     ` [v2 10/10] net/mlx5/hws: typo fix parm to param Itamar Gozlan
  2024-02-26 10:16     ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Raslan Darawsheh
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

From: Hamdan Igbaria <hamdani@nvidia.com>

Support the push_esp flag for the insert header action. It must be set
when inserting an ESP header; it also sets the next_protocol field in
the IPsec trailer.

Signed-off-by: Hamdan Igbaria <hamdani@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/common/mlx5/mlx5_prm.h       | 3 ++-
 drivers/net/mlx5/hws/mlx5dr.h        | 4 ++++
 drivers/net/mlx5/hws/mlx5dr_action.c | 4 ++++
 drivers/net/mlx5/hws/mlx5dr_action.h | 1 +
 drivers/net/mlx5/hws/mlx5dr_cmd.c    | 2 ++
 drivers/net/mlx5/hws/mlx5dr_cmd.h    | 1 +
 6 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 787211c85c..282e59e52c 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3616,7 +3616,8 @@ struct mlx5_ifc_stc_ste_param_insert_bits {
 	u8 action_type[0x4];
 	u8 encap[0x1];
 	u8 inline_data[0x1];
-	u8 reserved_at_6[0x4];
+	u8 push_esp[0x1];
+	u8 reserved_at_7[0x3];
 	u8 insert_anchor[0x6];
 	u8 reserved_at_10[0x1];
 	u8 insert_offset[0x7];
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index 3647e25cf2..d612f300c6 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -192,6 +192,10 @@ struct mlx5dr_action_insert_header {
 	 * requiring device to update offloaded fields (for example IPv4 total length).
 	 */
 	bool encap;
+	/* It must be set when adding an ESP header.
+	 * It also sets the next_protocol value in the IPsec trailer.
+	 */
+	bool push_esp;
 };
 
 enum mlx5dr_action_remove_header_type {
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.c b/drivers/net/mlx5/hws/mlx5dr_action.c
index 8589de5557..f55069c675 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.c
+++ b/drivers/net/mlx5/hws/mlx5dr_action.c
@@ -598,6 +598,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->action_type = MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT;
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		attr->insert_header.encap = action->reformat.encap;
+		attr->insert_header.push_esp = action->reformat.push_esp;
 		attr->insert_header.insert_anchor = action->reformat.anchor;
 		attr->insert_header.arg_id = action->reformat.arg_obj->id;
 		attr->insert_header.header_size = action->reformat.header_size;
@@ -635,6 +636,7 @@ static void mlx5dr_action_fill_stc_attr(struct mlx5dr_action *action,
 		attr->action_offset = MLX5DR_ACTION_OFFSET_DW6;
 		attr->reparse_mode = MLX5_IFC_STC_REPARSE_ALWAYS;
 		attr->insert_header.encap = 0;
+		attr->insert_header.push_esp = 0;
 		attr->insert_header.is_inline = 1;
 		attr->insert_header.insert_anchor = MLX5_HEADER_ANCHOR_PACKET_START;
 		attr->insert_header.insert_offset = MLX5DR_ACTION_HDR_LEN_L2_MACS;
@@ -1340,6 +1342,7 @@ mlx5dr_action_handle_insert_with_ptr(struct mlx5dr_action *action,
 			action[i].reformat.anchor = MLX5_HEADER_ANCHOR_PACKET_START;
 			action[i].reformat.offset = 0;
 			action[i].reformat.encap = 1;
+			action[i].reformat.push_esp = 0;
 		}
 
 		if (likely(reparse == MLX5DR_ACTION_STC_REPARSE_DEFAULT))
@@ -2087,6 +2090,7 @@ mlx5dr_action_create_insert_header_reparse(struct mlx5dr_context *ctx,
 
 		action[i].reformat.anchor = hdrs[i].anchor;
 		action[i].reformat.encap = hdrs[i].encap;
+		action[i].reformat.push_esp = hdrs[i].push_esp;
 		action[i].reformat.offset = hdrs[i].offset;
 		reformat_hdrs[i].sz = hdrs[i].hdr.sz;
 		reformat_hdrs[i].data = hdrs[i].hdr.data;
diff --git a/drivers/net/mlx5/hws/mlx5dr_action.h b/drivers/net/mlx5/hws/mlx5dr_action.h
index a8d9720c42..0c8e4bbb5a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_action.h
+++ b/drivers/net/mlx5/hws/mlx5dr_action.h
@@ -149,6 +149,7 @@ struct mlx5dr_action {
 					uint8_t offset;
 					bool encap;
 					uint8_t require_reparse;
+					bool push_esp;
 				} reformat;
 				struct {
 					struct mlx5dr_action
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index f77b194708..4676f65d60 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -486,6 +486,8 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
 			 MLX5_MODIFICATION_TYPE_INSERT);
 		MLX5_SET(stc_ste_param_insert, stc_parm, encap,
 			 stc_attr->insert_header.encap);
+		MLX5_SET(stc_ste_param_insert, stc_parm, push_esp,
+			 stc_attr->insert_header.push_esp);
 		MLX5_SET(stc_ste_param_insert, stc_parm, inline_data,
 			 stc_attr->insert_header.is_inline);
 		MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor,
diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.h b/drivers/net/mlx5/hws/mlx5dr_cmd.h
index 694231e08f..ee4a61b7eb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.h
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.h
@@ -122,6 +122,7 @@ struct mlx5dr_cmd_stc_modify_attr {
 			uint8_t encap;
 			uint16_t insert_anchor;
 			uint16_t insert_offset;
+			uint8_t push_esp;
 		} insert_header;
 		struct {
 			uint8_t aso_type;
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v2 10/10] net/mlx5/hws: typo fix parm to param
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (7 preceding siblings ...)
  2024-02-18  5:11     ` [v2 09/10] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
@ 2024-02-18  5:11     ` Itamar Gozlan
  2024-02-26 10:16     ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Raslan Darawsheh
  9 siblings, 0 replies; 20+ messages in thread
From: Itamar Gozlan @ 2024-02-18  5:11 UTC (permalink / raw)
  To: igozlan, erezsh, hamdani, kliteyn, viacheslavo, thomas,
	suanmingm, Dariusz Sosnowski, Ori Kam, Matan Azrad
  Cc: dev

Fix a typo in the mlx5dr_cmd.c file: rename the variable stc_parm
to stc_param, short for parameter.

Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_cmd.c | 68 +++++++++++++++----------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_cmd.c b/drivers/net/mlx5/hws/mlx5dr_cmd.c
index 4676f65d60..fd07028e5f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_cmd.c
+++ b/drivers/net/mlx5/hws/mlx5dr_cmd.c
@@ -453,66 +453,66 @@ mlx5dr_cmd_stc_create(struct ibv_context *ctx,
 
 static int
 mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
-				    void *stc_parm)
+				    void *stc_param)
 {
 	switch (stc_attr->action_type) {
 	case MLX5_IFC_STC_ACTION_TYPE_COUNTER:
-		MLX5_SET(stc_ste_param_flow_counter, stc_parm, flow_counter_id, stc_attr->id);
+		MLX5_SET(stc_ste_param_flow_counter, stc_param, flow_counter_id, stc_attr->id);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_TIR:
-		MLX5_SET(stc_ste_param_tir, stc_parm, tirn, stc_attr->dest_tir_num);
+		MLX5_SET(stc_ste_param_tir, stc_param, tirn, stc_attr->dest_tir_num);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_FT:
-		MLX5_SET(stc_ste_param_table, stc_parm, table_id, stc_attr->dest_table_id);
+		MLX5_SET(stc_ste_param_table, stc_param, table_id, stc_attr->dest_table_id);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_ACC_MODIFY_LIST:
-		MLX5_SET(stc_ste_param_header_modify_list, stc_parm,
+		MLX5_SET(stc_ste_param_header_modify_list, stc_param,
 			 header_modify_pattern_id, stc_attr->modify_header.pattern_id);
-		MLX5_SET(stc_ste_param_header_modify_list, stc_parm,
+		MLX5_SET(stc_ste_param_header_modify_list, stc_param,
 			 header_modify_argument_id, stc_attr->modify_header.arg_id);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_HEADER_REMOVE:
-		MLX5_SET(stc_ste_param_remove, stc_parm, action_type,
+		MLX5_SET(stc_ste_param_remove, stc_param, action_type,
 			 MLX5_MODIFICATION_TYPE_REMOVE);
-		MLX5_SET(stc_ste_param_remove, stc_parm, decap,
+		MLX5_SET(stc_ste_param_remove, stc_param, decap,
 			 stc_attr->remove_header.decap);
-		MLX5_SET(stc_ste_param_remove, stc_parm, remove_start_anchor,
+		MLX5_SET(stc_ste_param_remove, stc_param, remove_start_anchor,
 			 stc_attr->remove_header.start_anchor);
-		MLX5_SET(stc_ste_param_remove, stc_parm, remove_end_anchor,
+		MLX5_SET(stc_ste_param_remove, stc_param, remove_end_anchor,
 			 stc_attr->remove_header.end_anchor);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_HEADER_INSERT:
-		MLX5_SET(stc_ste_param_insert, stc_parm, action_type,
+		MLX5_SET(stc_ste_param_insert, stc_param, action_type,
 			 MLX5_MODIFICATION_TYPE_INSERT);
-		MLX5_SET(stc_ste_param_insert, stc_parm, encap,
+		MLX5_SET(stc_ste_param_insert, stc_param, encap,
 			 stc_attr->insert_header.encap);
-		MLX5_SET(stc_ste_param_insert, stc_parm, push_esp,
+		MLX5_SET(stc_ste_param_insert, stc_param, push_esp,
 			 stc_attr->insert_header.push_esp);
-		MLX5_SET(stc_ste_param_insert, stc_parm, inline_data,
+		MLX5_SET(stc_ste_param_insert, stc_param, inline_data,
 			 stc_attr->insert_header.is_inline);
-		MLX5_SET(stc_ste_param_insert, stc_parm, insert_anchor,
+		MLX5_SET(stc_ste_param_insert, stc_param, insert_anchor,
 			 stc_attr->insert_header.insert_anchor);
 		/* HW gets the next 2 sizes in words */
-		MLX5_SET(stc_ste_param_insert, stc_parm, insert_size,
+		MLX5_SET(stc_ste_param_insert, stc_param, insert_size,
 			 stc_attr->insert_header.header_size / W_SIZE);
-		MLX5_SET(stc_ste_param_insert, stc_parm, insert_offset,
+		MLX5_SET(stc_ste_param_insert, stc_param, insert_offset,
 			 stc_attr->insert_header.insert_offset / W_SIZE);
-		MLX5_SET(stc_ste_param_insert, stc_parm, insert_argument,
+		MLX5_SET(stc_ste_param_insert, stc_param, insert_argument,
 			 stc_attr->insert_header.arg_id);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_COPY:
 	case MLX5_IFC_STC_ACTION_TYPE_SET:
 	case MLX5_IFC_STC_ACTION_TYPE_ADD:
 	case MLX5_IFC_STC_ACTION_TYPE_ADD_FIELD:
-		*(__be64 *)stc_parm = stc_attr->modify_action.data;
+		*(__be64 *)stc_param = stc_attr->modify_action.data;
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_VPORT:
 	case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_UPLINK:
-		MLX5_SET(stc_ste_param_vport, stc_parm, vport_number,
+		MLX5_SET(stc_ste_param_vport, stc_param, vport_number,
 			 stc_attr->vport.vport_num);
-		MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id,
+		MLX5_SET(stc_ste_param_vport, stc_param, eswitch_owner_vhca_id,
 			 stc_attr->vport.esw_owner_vhca_id);
-		MLX5_SET(stc_ste_param_vport, stc_parm, eswitch_owner_vhca_id_valid, 1);
+		MLX5_SET(stc_ste_param_vport, stc_param, eswitch_owner_vhca_id_valid, 1);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_DROP:
 	case MLX5_IFC_STC_ACTION_TYPE_NOP:
@@ -520,27 +520,27 @@ mlx5dr_cmd_stc_modify_set_stc_param(struct mlx5dr_cmd_stc_modify_attr *stc_attr,
 	case MLX5_IFC_STC_ACTION_TYPE_ALLOW:
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_ASO:
-		MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_object_id,
+		MLX5_SET(stc_ste_param_execute_aso, stc_param, aso_object_id,
 			 stc_attr->aso.devx_obj_id);
-		MLX5_SET(stc_ste_param_execute_aso, stc_parm, return_reg_id,
+		MLX5_SET(stc_ste_param_execute_aso, stc_param, return_reg_id,
 			 stc_attr->aso.return_reg_id);
-		MLX5_SET(stc_ste_param_execute_aso, stc_parm, aso_type,
+		MLX5_SET(stc_ste_param_execute_aso, stc_param, aso_type,
 			 stc_attr->aso.aso_type);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_JUMP_TO_STE_TABLE:
-		MLX5_SET(stc_ste_param_ste_table, stc_parm, ste_obj_id,
+		MLX5_SET(stc_ste_param_ste_table, stc_param, ste_obj_id,
 			 stc_attr->ste_table.ste_obj_id);
-		MLX5_SET(stc_ste_param_ste_table, stc_parm, match_definer_id,
+		MLX5_SET(stc_ste_param_ste_table, stc_param, match_definer_id,
 			 stc_attr->ste_table.match_definer_id);
-		MLX5_SET(stc_ste_param_ste_table, stc_parm, log_hash_size,
+		MLX5_SET(stc_ste_param_ste_table, stc_param, log_hash_size,
 			 stc_attr->ste_table.log_hash_size);
 		break;
 	case MLX5_IFC_STC_ACTION_TYPE_REMOVE_WORDS:
-		MLX5_SET(stc_ste_param_remove_words, stc_parm, action_type,
+		MLX5_SET(stc_ste_param_remove_words, stc_param, action_type,
 			 MLX5_MODIFICATION_TYPE_REMOVE_WORDS);
-		MLX5_SET(stc_ste_param_remove_words, stc_parm, remove_start_anchor,
+		MLX5_SET(stc_ste_param_remove_words, stc_param, remove_start_anchor,
 			 stc_attr->remove_words.start_anchor);
-		MLX5_SET(stc_ste_param_remove_words, stc_parm,
+		MLX5_SET(stc_ste_param_remove_words, stc_param,
 			 remove_size, stc_attr->remove_words.num_of_words);
 		break;
 	default:
@@ -557,7 +557,7 @@ mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj,
 {
 	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
 	uint32_t in[MLX5_ST_SZ_DW(create_stc_in)] = {0};
-	void *stc_parm;
+	void *stc_param;
 	void *attr;
 	int ret;
 
@@ -577,8 +577,8 @@ mlx5dr_cmd_stc_modify(struct mlx5dr_devx_obj *devx_obj,
 		   MLX5_IFC_MODIFY_STC_FIELD_SELECT_NEW_STC);
 
 	/* Set destination TIRN, TAG, FT ID, STE ID */
-	stc_parm = MLX5_ADDR_OF(stc, attr, stc_param);
-	ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_parm);
+	stc_param = MLX5_ADDR_OF(stc, attr, stc_param);
+	ret = mlx5dr_cmd_stc_modify_set_stc_param(stc_attr, stc_param);
 	if (ret)
 		return ret;
 
-- 
2.39.3


^ permalink raw reply	[flat|nested] 20+ messages in thread

* RE: [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index
  2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
                       ` (8 preceding siblings ...)
  2024-02-18  5:11     ` [v2 10/10] net/mlx5/hws: typo fix parm to param Itamar Gozlan
@ 2024-02-26 10:16     ` Raslan Darawsheh
  9 siblings, 0 replies; 20+ messages in thread
From: Raslan Darawsheh @ 2024-02-26 10:16 UTC (permalink / raw)
  To: Itamar Gozlan, Itamar Gozlan, Erez Shitrit, Hamdan Agbariya,
	Yevgeny Kliteynik, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon (EXTERNAL),
	Suanming Mou, Dariusz Sosnowski, Ori Kam, Matan Azrad,
	Alex Vesker
  Cc: dev

Hi,

> -----Original Message-----
> From: Itamar Gozlan <igozlan@nvidia.com>
> Sent: Sunday, February 18, 2024 7:11 AM
> To: Itamar Gozlan <igozlan@nvidia.com>; Erez Shitrit <erezsh@nvidia.com>;
> Hamdan Agbariya <hamdani@nvidia.com>; Yevgeny Kliteynik
> <kliteyn@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-
> Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Suanming
> Mou <suanmingm@nvidia.com>; Dariusz Sosnowski
> <dsosnowski@nvidia.com>; Ori Kam <orika@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Alex Vesker <valex@nvidia.com>
> Cc: dev@dpdk.org
> Subject: [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index
> 
> The location of indexed rules is determined by the index, not the item hash. A
> matcher test is added to prevent access to non-existent items.
> This avoids unnecessary processing and potential segmentation faults.
> 
> Fixes: 405242c ("net/mlx5/hws: add rule object")
> Signed-off-by: Itamar Gozlan <igozlan@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>


Fixed a few Fixes tags,
Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2024-02-26 10:16 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-13  9:50 [PATCH 1/9] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
2024-02-13  9:50 ` [PATCH 2/9] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
2024-02-13  9:50 ` [PATCH 3/9] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
2024-02-13  9:50 ` [PATCH 4/9] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
2024-02-13  9:50 ` [PATCH 5/9] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
2024-02-13  9:50 ` [PATCH 6/9] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
2024-02-13  9:50 ` [PATCH 7/9] net/mlx5/hws: extend action template creation API Itamar Gozlan
2024-02-13  9:50 ` [PATCH 8/9] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
2024-02-13  9:50 ` [PATCH 9/9] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
2024-02-18  5:11   ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Itamar Gozlan
2024-02-18  5:11     ` [v2 02/10] net/mlx5/hws: add check for not supported fields in VXLAN Itamar Gozlan
2024-02-18  5:11     ` [v2 03/10] net/mlx5/hws: add support for resizable matchers Itamar Gozlan
2024-02-18  5:11     ` [v2 04/10] net/mlx5/hws: reordering the STE fields to improve hash Itamar Gozlan
2024-02-18  5:11     ` [v2 05/10] net/mlx5/hws: check the rule status on rule update Itamar Gozlan
2024-02-18  5:11     ` [v2 06/10] net/mlx5/hws: fix VLAN item handling on non relaxed mode Itamar Gozlan
2024-02-18  5:11     ` [v2 07/10] net/mlx5/hws: extend action template creation API Itamar Gozlan
2024-02-18  5:11     ` [v2 08/10] net/mlx5/hws: add missing actions STE limitation Itamar Gozlan
2024-02-18  5:11     ` [v2 09/10] net/mlx5/hws: support push_esp flag for insert header action Itamar Gozlan
2024-02-18  5:11     ` [v2 10/10] net/mlx5/hws: typo fix parm to param Itamar Gozlan
2024-02-26 10:16     ` [v2 01/10] net/mlx5/hws: skip RTE item when inserting rules by index Raslan Darawsheh

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).