DPDK patches and discussions
From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: getelson@nvidia.com,   <mkashani@nvidia.com>,
	rasland@nvidia.com, thomas@monjalon.net,
	"Yevgeny Kliteynik" <kliteyn@nvidia.com>,
	"Dariusz Sosnowski" <dsosnowski@nvidia.com>,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
	"Ori Kam" <orika@nvidia.com>,
	"Suanming Mou" <suanmingm@nvidia.com>,
	"Matan Azrad" <matan@nvidia.com>
Subject: [PATCH 1/5] net/mlx5/hws: add support for resizable matchers
Date: Fri, 2 Feb 2024 13:56:07 +0200	[thread overview]
Message-ID: <20240202115611.288892-2-getelson@nvidia.com> (raw)
In-Reply-To: <20240202115611.288892-1-getelson@nvidia.com>

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

Add support for matcher resize with the following new API calls:
 - mlx5dr_matcher_resize_set_target
 - mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the src
matcher to the dst matcher. Both matchers must have the same characteristics
(e.g. same mt, same at). It is the user's responsibility to make sure that
the dst matcher has enough space for the moved rules.
After this function completes, the user can move rules from the src matcher
into the dst matcher, and is no longer allowed to insert rules to the src
matcher.

The second function moves a rule from the matcher that is being resized to
the bigger matcher. Moving a single rule includes creating a new rule in the
destination matcher and deleting the rule from the source matcher. This
operation creates a single completion.

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         |  39 +++++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 +++
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 229 ++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  34 +++-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 +++++
 drivers/net/mlx5/mlx5_flow.h          |   2 +
 9 files changed, 537 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d88f73ab57..9d8f8e13dc 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr {
 	/* Define the insertion and distribution modes for this matcher */
 	enum mlx5dr_matcher_insert_mode insert_mode;
 	enum mlx5dr_matcher_distribute_mode distribute_mode;
+	/* Define whether the created matcher supports resizing into a bigger matcher */
+	bool resizable;
 	union {
 		struct {
 			uint8_t sz_row_log;
@@ -419,6 +421,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher);
 int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_action_template *at);
 
+/* Link two matchers and enable moving rules from src matcher to dst matcher.
+ * Both matchers must belong to tables of the same type, must be created with
+ * the 'resizable' property, and should have the same characteristics (e.g. same mt, same at).
+ *
+ * It is the user's responsibility to make sure that the dst matcher
+ * was allocated with the appropriate size.
+ *
+ * Once the function is completed, the user is:
+ *  - allowed to move rules from src into dst matcher
+ *  - no longer allowed to insert rules to the src matcher
+ *
+ * The user is always allowed to insert rules to the dst matcher and
+ * to delete rules from any matcher.
+ *
+ * @param[in] src_matcher
+ *	source matcher for moving rules from
+ * @param[in] dst_matcher
+ *	destination matcher for moving rules to
+ * @return zero on success, non-zero otherwise.
+ */
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher);
+
+/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher
+ *
+ * @param[in] src_matcher
+ *	matcher that the rule belongs to
+ * @param[in] rule
+ *	the rule to move
+ * @param[in] attr
+ *	rule attributes
+ * @return zero on success, non-zero otherwise.
+ */
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr);
+
 /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation.
  *
  * @return size in bytes of rule handle struct.
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 0b60479406..6703c233bb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2919,9 +2919,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer)
 	return definer->obj->id;
 }
 
-static int
-mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
-		       struct mlx5dr_definer *definer_b)
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b)
 {
 	int i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 6f1c99e37a..9c3db53ff3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -673,4 +673,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache);
 
 void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache);
 
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b);
+
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 4ea161eae6..5075342d72 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	return 0;
 }
 
+static int
+mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	resize_data = simple_calloc(1, sizeof(*resize_data));
+	if (!resize_data) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	resize_data->stc = src_matcher->action_ste.stc;
+	resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0;
+	resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1;
+	resize_data->action_ste_pool = src_matcher->action_ste.max_stes ?
+				       src_matcher->action_ste.pool :
+				       NULL;
+
+	/* Place the new resized matcher on the dst matcher's list */
+	LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+			 resize_data, next);
+
+	/* Move all the previous resized matchers to the dst matcher's list */
+	while (!LIST_EMPTY(&src_matcher->resize_data)) {
+		resize_data = LIST_FIRST(&src_matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+		LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+				 resize_data, next);
+	}
+
+	return 0;
+}
+
+static void
+mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	if (!mlx5dr_matcher_is_resizable(matcher) ||
+	    !matcher->action_ste.max_stes)
+		return;
+
+	while (!LIST_EMPTY(&matcher->resize_data)) {
+		resize_data = LIST_FIRST(&matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+
+		mlx5dr_action_free_single_stc(matcher->tbl->ctx,
+					      matcher->tbl->type,
+					      &resize_data->stc);
+
+		if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB)
+			mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1);
+		mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0);
+		if (resize_data->action_ste_pool)
+			mlx5dr_pool_destroy(resize_data->action_ste_pool);
+		simple_free(resize_data);
+	}
+}
+
 static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
 {
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
@@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher)
 {
 	struct mlx5dr_table *tbl = matcher->tbl;
 
-	if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION)
+	if (!matcher->action_ste.max_stes ||
+	    matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION ||
+	    mlx5dr_matcher_is_in_resize(matcher))
 		return;
 
 	mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc);
@@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 			DR_LOG(ERR, "Root matcher does not support at attaching");
 			goto not_supported;
 		}
+		if (attr->resizable) {
+			DR_LOG(ERR, "Root matcher does not support resizing");
+			goto not_supported;
+		}
 		return 0;
 	}
 
@@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 	    attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH)
 		attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log);
 
+	matcher->flags |= attr->resizable ? MLX5DR_MATCHER_FLAGS_RESIZABLE : 0;
+
 	return mlx5dr_matcher_check_attr_sz(caps, attr);
 
 not_supported:
@@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher)
 
 static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher)
 {
+	mlx5dr_matcher_resize_uninit(matcher);
 	mlx5dr_matcher_disconnect(matcher);
 	mlx5dr_matcher_create_uninit_shared(matcher);
 	mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH);
@@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt)
 	simple_free(mt);
 	return 0;
 }
+
+static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher,
+					  struct mlx5dr_matcher *dst_matcher)
+{
+	int i;
+
+	if (mlx5dr_table_is_root(src_matcher->tbl) ||
+	    mlx5dr_table_is_root(dst_matcher->tbl)) {
+		DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported");
+		goto out_einval;
+	}
+
+	if (src_matcher->tbl->type != dst_matcher->tbl->type) {
+		DR_LOG(ERR, "Table type mismatch for src/dst matchers");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_req_fw_wqe(src_matcher) ||
+	    mlx5dr_matcher_req_fw_wqe(dst_matcher)) {
+		DR_LOG(ERR, "Matchers require FW WQE - resize unsupported");
+		goto out_einval;
+	}
+
+	if (!mlx5dr_matcher_is_resizable(src_matcher) ||
+	    !mlx5dr_matcher_is_resizable(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is not resizable");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_insert_by_idx(src_matcher) !=
+	    mlx5dr_matcher_is_insert_by_idx(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matchers insert mode mismatch");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_in_resize(src_matcher) ||
+	    mlx5dr_matcher_is_in_resize(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is already in resize");
+		goto out_einval;
+	}
+
+	/* Compare match templates - make sure the definers are equivalent */
+	if (src_matcher->num_of_mt != dst_matcher->num_of_mt) {
+		DR_LOG(ERR, "Src/dst matcher match templates mismatch");
+		goto out_einval;
+	}
+
+	if (src_matcher->action_ste.max_stes != dst_matcher->action_ste.max_stes) {
+		DR_LOG(ERR, "Src/dst matcher max STEs mismatch");
+		goto out_einval;
+	}
+
+	for (i = 0; i < src_matcher->num_of_mt; i++) {
+		if (mlx5dr_definer_compare(src_matcher->mt[i].definer,
+					   dst_matcher->mt[i].definer)) {
+			DR_LOG(ERR, "Src/dst matcher definers mismatch");
+			goto out_einval;
+		}
+	}
+
+	return 0;
+
+out_einval:
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher)
+{
+	int ret = 0;
+
+	pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock);
+
+	if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) {
+		ret = -rte_errno;
+		goto out;
+	}
+
+	src_matcher->resize_dst = dst_matcher;
+
+	if (mlx5dr_matcher_resize_init(src_matcher)) {
+		src_matcher->resize_dst = NULL;
+		ret = -rte_errno;
+	}
+
+out:
+	pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock);
+	return ret;
+}
+
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) {
+		DR_LOG(ERR, "Matcher is not resizable or not in resize");
+		goto out_einval;
+	}
+
+	if (unlikely(src_matcher != rule->matcher)) {
+		DR_LOG(ERR, "Rule doesn't belong to src matcher");
+		goto out_einval;
+	}
+
+	return mlx5dr_rule_move_hws_add(rule, attr);
+
+out_einval:
+	rte_errno = EINVAL;
+	return -rte_errno;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
index 363a61fd41..0f2bf96e8b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.h
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags {
 	MLX5DR_MATCHER_FLAGS_RANGE_DEFINER	= 1 << 0,
 	MLX5DR_MATCHER_FLAGS_HASH_DEFINER	= 1 << 1,
 	MLX5DR_MATCHER_FLAGS_COLLISION		= 1 << 2,
+	MLX5DR_MATCHER_FLAGS_RESIZABLE		= 1 << 3,
 };
 
 struct mlx5dr_match_template {
@@ -59,6 +60,14 @@ struct mlx5dr_matcher_action_ste {
 	uint8_t max_stes;
 };
 
+struct mlx5dr_matcher_resize_data {
+	struct mlx5dr_pool_chunk stc;
+	struct mlx5dr_devx_obj *action_ste_rtc_0;
+	struct mlx5dr_devx_obj *action_ste_rtc_1;
+	struct mlx5dr_pool *action_ste_pool;
+	LIST_ENTRY(mlx5dr_matcher_resize_data) next;
+};
+
 struct mlx5dr_matcher {
 	struct mlx5dr_table *tbl;
 	struct mlx5dr_matcher_attr attr;
@@ -71,10 +80,12 @@ struct mlx5dr_matcher {
 	uint8_t flags;
 	struct mlx5dr_devx_obj *end_ft;
 	struct mlx5dr_matcher *col_matcher;
+	struct mlx5dr_matcher *resize_dst;
 	struct mlx5dr_matcher_match_ste match_ste;
 	struct mlx5dr_matcher_action_ste action_ste;
 	struct mlx5dr_definer *hash_definer;
 	LIST_ENTRY(mlx5dr_matcher) next;
+	LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data;
 };
 
 static inline bool
@@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt)
 	return (!!mt->range_definer);
 }
 
+static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher)
+{
+	return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE);
+}
+
+static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher)
+{
+	return !!matcher->resize_dst;
+}
+
 static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher)
 {
 	/* Currently HWS doesn't support hash different from match or range */
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index fa19303b91..03e62a3f14 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -111,6 +111,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 	}
 }
 
+static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule,
+				     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst;
+
+	if (rule->resize_info->rtc_0) {
+		ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id;
+		ste_attr->retry_rtc_0 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_0->id : 0;
+	}
+	if (rule->resize_info->rtc_1) {
+		ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id;
+		ste_attr->retry_rtc_1 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_1->id : 0;
+	}
+}
+
 static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 				 struct mlx5dr_rule *rule,
 				 bool err,
@@ -131,12 +148,41 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info));
+	if (unlikely(!rule->resize_info)) {
+		assert(rule->resize_info);
+		rte_errno = ENOMEM;
+	}
+
+	memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl,
+	       sizeof(rule->resize_info->ctrl_seg));
+	memcpy(rule->resize_info->data_seg, ste_attr->wqe_data,
+	       sizeof(rule->resize_info->data_seg));
+
+	rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ?
+					     rule->matcher->action_ste.pool :
+					     NULL;
+}
+
+static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule)
+{
+	if (rule->resize_info) {
+		simple_free(rule->resize_info);
+		rule->resize_info = NULL;
+	}
+}
+
 static void
 mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 			     struct mlx5dr_send_ste_attr *ste_attr)
 {
 	struct mlx5dr_match_template *mt = rule->matcher->mt;
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
+	struct mlx5dr_rule_match_tag *tag;
 
 	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		uint8_t *src_tag;
@@ -158,17 +204,31 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 		return;
 	}
 
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_save_resize_info(rule, ste_attr);
+		tag = &rule->resize_info->tag;
+	} else {
+		tag = &rule->tag;
+	}
+
 	if (is_jumbo)
 		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
 	else
-		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+		memcpy(tag->match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
 }
 
 static void
 mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
 {
-	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		simple_free(rule->tag_ptr);
+		return;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_clear_resize_info(rule);
+		return;
+	}
 }
 
 static void
@@ -185,8 +245,10 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
 			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
 			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
 		}
-	} else {
+	} else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
 		ste_attr->wqe_tag = &rule->tag;
+	} else {
+		ste_attr->wqe_tag = &rule->resize_info->tag;
 	}
 }
 
@@ -217,6 +279,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_pool *pool;
 
 	if (rule->action_ste_idx > -1 &&
 	    !matcher->attr.optimize_using_rule_idx &&
@@ -226,7 +289,11 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 		/* This release is safe only when the rule match part was deleted */
 		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
 		ste.offset = rule->action_ste_idx;
-		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+
+		/* Free the original action pool if rule was resized */
+		pool = mlx5dr_matcher_is_resizable(matcher) ? rule->resize_info->action_ste_pool :
+							      matcher->action_ste.pool;
+		mlx5dr_pool_chunk_free(pool, &ste);
 	}
 }
 
@@ -263,6 +330,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr)
+{
+	/* Save the old RTC IDs to be later used in match STE delete */
+	rule->resize_info->rtc_0 = rule->rtc_0;
+	rule->resize_info->rtc_1 = rule->rtc_1;
+	rule->resize_info->rule_idx = attr->rule_idx;
+
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING;
+}
+
 static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 					 struct mlx5dr_rule_attr *attr,
 					 uint8_t mt_idx,
@@ -343,7 +427,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 	/* Send WQEs to FW */
 	mlx5dr_send_stes_fw(queue, &ste_attr);
 
-	/* Backup TAG on the rule for deletion */
+	/* Backup TAG on the rule for deletion, and save ctrl/data
+	 * segments to be used when resizing the matcher.
+	 */
 	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
@@ -466,7 +552,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		mlx5dr_send_ste(queue, &ste_attr);
 	}
 
-	/* Backup TAG on the rule for deletion, only after insertion */
+	/* Backup TAG on the rule for deletion and resize info for
+	 * moving rules to a new matcher, only after insertion.
+	 */
 	if (!is_update)
 		mlx5dr_rule_save_delete_info(rule, &ste_attr);
 
@@ -493,7 +581,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
-	/* Clear complex tag */
+	/* Clear complex tag or info that was saved for matcher resizing */
 	mlx5dr_rule_clear_delete_info(rule);
 
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
@@ -568,12 +656,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 
 	mlx5dr_rule_load_delete_info(rule, &ste_attr);
 
-	if (unlikely(fw_wqe)) {
+	if (unlikely(fw_wqe))
 		mlx5dr_send_stes_fw(queue, &ste_attr);
-		mlx5dr_rule_clear_delete_info(rule);
-	} else {
+	else
 		mlx5dr_send_ste(queue, &ste_attr);
-	}
+
+	mlx5dr_rule_clear_delete_info(rule);
 
 	return 0;
 }
@@ -661,9 +749,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
-static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+
 	if (unlikely(!attr->user_data)) {
 		rte_errno = EINVAL;
 		return rte_errno;
@@ -678,6 +768,113 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) {
+		/* Matcher in resize - new rules are not allowed */
+		rte_errno = EAGAIN;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		/* Update is not supported on resizable matchers */
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck_create(rule, attr);
+}
+
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue_ptr,
+				void *user_data)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0};
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_engine *queue = queue_ptr;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING;
+
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.send_attr.user_data = user_data;
+	ste_attr.rtc_0 = rule->resize_info->rtc_0;
+	ste_attr.rtc_1 = rule->resize_info->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1;
+	ste_attr.wqe_ctrl = &empty_wqe_ctrl;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		ste_attr.direct_index = rule->resize_info->rule_idx;
+
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
+		return -rte_errno;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_move_init(rule, attr);
+
+	mlx5dr_rule_move_get_rtc(rule, &ste_attr);
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg;
+	ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg;
+	ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+				attr->rule_idx : 0;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -686,13 +883,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       struct mlx5dr_rule_attr *attr,
 		       struct mlx5dr_rule *rule_handle)
 {
-	struct mlx5dr_context *ctx;
 	int ret;
 
 	rule_handle->matcher = matcher;
-	ctx = matcher->tbl->ctx;
 
-	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr)))
 		return -rte_errno;
 
 	assert(matcher->num_of_mt >= mt_idx);
@@ -720,7 +915,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 {
 	int ret;
 
-	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
 		return -rte_errno;
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
@@ -753,7 +948,7 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 		return -rte_errno;
 	}
 
-	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
 	ret = mlx5dr_rule_create_hws(rule_handle,
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f7d97eead5..14115fe329 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,7 +10,9 @@ enum {
 	MLX5DR_ACTIONS_SZ = 12,
 	MLX5DR_MATCH_TAG_SZ = 32,
 	MLX5DR_JUMBO_TAG_SZ = 44,
-	MLX5DR_STE_SZ = 64,
+	MLX5DR_STE_SZ = MLX5DR_STE_CTRL_SZ +
+			MLX5DR_ACTIONS_SZ +
+			MLX5DR_MATCH_TAG_SZ,
 };
 
 enum mlx5dr_rule_status {
@@ -23,6 +25,12 @@ enum mlx5dr_rule_status {
 	MLX5DR_RULE_STATUS_FAILED,
 };
 
+enum mlx5dr_rule_move_state {
+	MLX5DR_RULE_RESIZE_STATE_IDLE,
+	MLX5DR_RULE_RESIZE_STATE_WRITING,
+	MLX5DR_RULE_RESIZE_STATE_DELETING,
+};
+
 struct mlx5dr_rule_match_tag {
 	union {
 		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
@@ -33,6 +41,17 @@ struct mlx5dr_rule_match_tag {
 	};
 };
 
+struct mlx5dr_rule_resize_info {
+	uint8_t state;
+	uint32_t rtc_0;
+	uint32_t rtc_1;
+	uint32_t rule_idx;
+	struct mlx5dr_pool *action_ste_pool;
+	struct mlx5dr_rule_match_tag tag;
+	uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */
+	uint8_t data_seg[MLX5DR_STE_SZ];	  /* Data segment of STE: 64 bytes */
+};
+
 struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
@@ -40,6 +59,7 @@ struct mlx5dr_rule {
 		/* Pointer to tag to store more than one tag */
 		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
+		struct mlx5dr_rule_resize_info *resize_info;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
 	uint32_t rtc_1; /* The RTC into which the STE was inserted */
@@ -50,4 +70,16 @@ struct mlx5dr_rule {
 
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
 
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue, void *user_data);
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr);
+
+static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule)
+{
+	return rule->resize_info &&
+	       rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE;
+}
+
 #endif /* MLX5DR_RULE_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..936dfc1fe6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue)
 	mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+		/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+		/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 
 	/* Update rule status for the last completion */
 	if (!priv->rule->pending_wqes) {
+		if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+			mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+			return;
+		}
+
 		if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 			/* Rule completely failed and doesn't require cleanup */
 			if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b35079b30a..b003e97dc9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -17,6 +17,8 @@
 #include <mlx5_prm.h>
 
 #include "mlx5.h"
+#include "hws/mlx5dr.h"
+#include "rte_flow.h"
 #include "rte_pmd_mlx5.h"
 #include "hws/mlx5dr.h"
 
-- 
2.39.2



Thread overview: 17+ messages
2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-02 11:56 ` Gregory Etelson [this message]
2024-02-28 10:25   ` [PATCH v2 0/4] " Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 1/4] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 1/4] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-28 15:50     ` [PATCH v3 0/4] " Raslan Darawsheh
2024-02-02 11:56 ` [PATCH 2/5] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-02 11:56 ` [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-02 11:56 ` [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-02 11:56 ` [PATCH 5/5] net/mlx5: add support for flow table resizing Gregory Etelson
