DPDK patches and discussions
* [PATCH 0/5] net/mlx5: add support for flow table resizing
@ 2024-02-02 11:56 Gregory Etelson
  2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
                   ` (4 more replies)
  0 siblings, 5 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev; +Cc: getelson, rasland, thomas

Gregory Etelson (3):
  net/mlx5: fix parameters verification in HWS table create
  net/mlx5: move multi-pattern actions management to table level
  net/mlx5: add support for flow table resizing

Maayan Kashani (1):
  net/mlx5: add resize function to ipool

Yevgeny Kliteynik (1):
  net/mlx5/hws: add support for resizable matchers

 drivers/net/mlx5/hws/mlx5dr.h         |  39 ++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 ++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 +
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 229 +++++++-
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  34 +-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 ++
 drivers/net/mlx5/mlx5.h               |   5 +
 drivers/net/mlx5/mlx5_flow.c          |  51 ++
 drivers/net/mlx5/mlx5_flow.h          | 103 +++-
 drivers/net/mlx5/mlx5_flow_hw.c       | 748 +++++++++++++++++++-------
 drivers/net/mlx5/mlx5_host.c          | 211 ++++++++
 drivers/net/mlx5/mlx5_utils.c         |  29 +
 drivers/net/mlx5/mlx5_utils.h         |  16 +
 15 files changed, 1498 insertions(+), 222 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_host.c

Depends-on: series-30952 ([v2] ethdev: add template table resize API)

-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 1/5] net/mlx5/hws: add support for resizable matchers
  2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
@ 2024-02-02 11:56 ` Gregory Etelson
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
  2024-02-02 11:56 ` [PATCH 2/5] net/mlx5: add resize function to ipool Gregory Etelson
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev
  Cc: getelson,
	rasland, thomas, Yevgeny Kliteynik, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

From: Yevgeny Kliteynik <kliteyn@nvidia.com>

Add support for matcher resize with the following new API calls:
 - mlx5dr_matcher_resize_set_target
 - mlx5dr_matcher_resize_rule_move

The first function links two matchers and allows moving rules from the
src matcher to the dst matcher. Both matchers should have the same
characteristics (e.g. same mt, same at). It is the user's responsibility
to make sure that the dst matcher has enough space for the moved rules.
After this function, the user can move rules from the src matcher into
the dst matcher and is no longer allowed to insert rules into the src
matcher.

The second function moves a rule from the matcher that is being resized
to the bigger matcher. Moving a single rule includes creating a new rule
in the destination matcher and deleting the rule from the source
matcher. This operation creates a single completion.
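
A minimal usage sketch follows (only the two mlx5dr_matcher_resize_*
calls are added by this patch; mlx5dr_matcher_create() is the existing
matcher constructor, the helper name is illustrative, and completion
polling plus error handling are trimmed):

#include <stdint.h>
#include <rte_errno.h>
#include "mlx5dr.h"

/* Sketch: move all rules of 'src' into a larger, resizable matcher. */
static int
matcher_resize_sketch(struct mlx5dr_table *tbl,
		      struct mlx5dr_matcher *src,
		      struct mlx5dr_match_template *mt[], uint8_t num_of_mt,
		      struct mlx5dr_action_template *at[], uint8_t num_of_at,
		      struct mlx5dr_matcher_attr *bigger_attr,
		      struct mlx5dr_rule *rules[], uint32_t num_rules,
		      struct mlx5dr_rule_attr *rule_attr)
{
	struct mlx5dr_matcher *dst;
	uint32_t i;

	/* bigger_attr must set 'resizable' and a rule.num_log large
	 * enough for all moved rules plus new insertions.
	 */
	dst = mlx5dr_matcher_create(tbl, mt, num_of_mt, at, num_of_at,
				    bigger_attr);
	if (!dst)
		return -rte_errno;

	/* Link src -> dst; new insertions into src are no longer allowed. */
	if (mlx5dr_matcher_resize_set_target(src, dst))
		return -rte_errno;

	/* Each move writes the rule into dst and deletes it from src,
	 * generating a single completion per rule.
	 */
	for (i = 0; i < num_rules; i++)
		if (mlx5dr_matcher_resize_rule_move(src, rules[i], rule_attr))
			return -rte_errno;

	/* src can be destroyed once all move completions were polled. */
	return 0;
}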

Signed-off-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h         |  39 +++++
 drivers/net/mlx5/hws/mlx5dr_definer.c |   5 +-
 drivers/net/mlx5/hws/mlx5dr_definer.h |   3 +
 drivers/net/mlx5/hws/mlx5dr_matcher.c | 181 +++++++++++++++++++-
 drivers/net/mlx5/hws/mlx5dr_matcher.h |  21 +++
 drivers/net/mlx5/hws/mlx5dr_rule.c    | 229 ++++++++++++++++++++++++--
 drivers/net/mlx5/hws/mlx5dr_rule.h    |  34 +++-
 drivers/net/mlx5/hws/mlx5dr_send.c    |  45 +++++
 drivers/net/mlx5/mlx5_flow.h          |   2 +
 9 files changed, 537 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index d88f73ab57..9d8f8e13dc 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -139,6 +139,8 @@ struct mlx5dr_matcher_attr {
 	/* Define the insertion and distribution modes for this matcher */
 	enum mlx5dr_matcher_insert_mode insert_mode;
 	enum mlx5dr_matcher_distribute_mode distribute_mode;
+	/* Define whether the created matcher supports resizing into a bigger matcher */
+	bool resizable;
 	union {
 		struct {
 			uint8_t sz_row_log;
@@ -419,6 +421,43 @@ int mlx5dr_matcher_destroy(struct mlx5dr_matcher *matcher);
 int mlx5dr_matcher_attach_at(struct mlx5dr_matcher *matcher,
 			     struct mlx5dr_action_template *at);
 
+/* Link two matchers and enable moving rules from src matcher to dst matcher.
+ * Both matchers must be in the same table type, must be created with 'resizable'
+ * property, and should have the same characteristics (e.g. same mt, same at).
+ *
+ * It is the user's responsibility to make sure that the dst matcher
+ * was allocated with the appropriate size.
+ *
+ * Once the function is completed, the user is:
+ *  - allowed to move rules from src into dst matcher
+ *  - no longer allowed to insert rules to the src matcher
+ *
+ * The user is always allowed to insert rules to the dst matcher and
+ * to delete rules from any matcher.
+ *
+ * @param[in] src_matcher
+ *	source matcher for moving rules from
+ * @param[in] dst_matcher
+ *	destination matcher for moving rules to
+ * @return zero on successful move, non zero otherwise.
+ */
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher);
+
+/* Enqueue moving rule operation: moving rule from src matcher to a dst matcher
+ *
+ * @param[in] src_matcher
+ *	matcher that the rule belongs to
+ * @param[in] rule
+ *	the rule to move
+ * @param[in] attr
+ *	rule attributes
+ * @return zero on success, non zero otherwise.
+ */
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr);
+
 /* Get the size of the rule handle (mlx5dr_rule) to be used on rule creation.
  *
  * @return size in bytes of rule handle struct.
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 0b60479406..6703c233bb 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2919,9 +2919,8 @@ int mlx5dr_definer_get_id(struct mlx5dr_definer *definer)
 	return definer->obj->id;
 }
 
-static int
-mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
-		       struct mlx5dr_definer *definer_b)
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b)
 {
 	int i;
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.h b/drivers/net/mlx5/hws/mlx5dr_definer.h
index 6f1c99e37a..9c3db53ff3 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.h
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.h
@@ -673,4 +673,7 @@ int mlx5dr_definer_init_cache(struct mlx5dr_definer_cache **cache);
 
 void mlx5dr_definer_uninit_cache(struct mlx5dr_definer_cache *cache);
 
+int mlx5dr_definer_compare(struct mlx5dr_definer *definer_a,
+			   struct mlx5dr_definer *definer_b);
+
 #endif
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.c b/drivers/net/mlx5/hws/mlx5dr_matcher.c
index 4ea161eae6..5075342d72 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.c
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.c
@@ -704,6 +704,65 @@ static int mlx5dr_matcher_check_and_process_at(struct mlx5dr_matcher *matcher,
 	return 0;
 }
 
+static int
+mlx5dr_matcher_resize_init(struct mlx5dr_matcher *src_matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	resize_data = simple_calloc(1, sizeof(*resize_data));
+	if (!resize_data) {
+		rte_errno = ENOMEM;
+		return rte_errno;
+	}
+
+	resize_data->stc = src_matcher->action_ste.stc;
+	resize_data->action_ste_rtc_0 = src_matcher->action_ste.rtc_0;
+	resize_data->action_ste_rtc_1 = src_matcher->action_ste.rtc_1;
+	resize_data->action_ste_pool = src_matcher->action_ste.max_stes ?
+				       src_matcher->action_ste.pool :
+				       NULL;
+
+	/* Place the new resized matcher on the dst matcher's list */
+	LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+			 resize_data, next);
+
+	/* Move all the previous resized matchers to the dst matcher's list */
+	while (!LIST_EMPTY(&src_matcher->resize_data)) {
+		resize_data = LIST_FIRST(&src_matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+		LIST_INSERT_HEAD(&src_matcher->resize_dst->resize_data,
+				 resize_data, next);
+	}
+
+	return 0;
+}
+
+static void
+mlx5dr_matcher_resize_uninit(struct mlx5dr_matcher *matcher)
+{
+	struct mlx5dr_matcher_resize_data *resize_data;
+
+	if (!mlx5dr_matcher_is_resizable(matcher) ||
+	    !matcher->action_ste.max_stes)
+		return;
+
+	while (!LIST_EMPTY(&matcher->resize_data)) {
+		resize_data = LIST_FIRST(&matcher->resize_data);
+		LIST_REMOVE(resize_data, next);
+
+		mlx5dr_action_free_single_stc(matcher->tbl->ctx,
+					      matcher->tbl->type,
+					      &resize_data->stc);
+
+		if (matcher->tbl->type == MLX5DR_TABLE_TYPE_FDB)
+			mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_1);
+		mlx5dr_cmd_destroy_obj(resize_data->action_ste_rtc_0);
+		if (resize_data->action_ste_pool)
+			mlx5dr_pool_destroy(resize_data->action_ste_pool);
+		simple_free(resize_data);
+	}
+}
+
 static int mlx5dr_matcher_bind_at(struct mlx5dr_matcher *matcher)
 {
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(matcher->mt);
@@ -790,7 +849,9 @@ static void mlx5dr_matcher_unbind_at(struct mlx5dr_matcher *matcher)
 {
 	struct mlx5dr_table *tbl = matcher->tbl;
 
-	if (!matcher->action_ste.max_stes || matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION)
+	if (!matcher->action_ste.max_stes ||
+	    matcher->flags & MLX5DR_MATCHER_FLAGS_COLLISION ||
+	    mlx5dr_matcher_is_in_resize(matcher))
 		return;
 
 	mlx5dr_action_free_single_stc(tbl->ctx, tbl->type, &matcher->action_ste.stc);
@@ -947,6 +1008,10 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 			DR_LOG(ERR, "Root matcher does not support at attaching");
 			goto not_supported;
 		}
+		if (attr->resizable) {
+			DR_LOG(ERR, "Root matcher does not support resizing");
+			goto not_supported;
+		}
 		return 0;
 	}
 
@@ -960,6 +1025,8 @@ mlx5dr_matcher_process_attr(struct mlx5dr_cmd_query_caps *caps,
 	    attr->insert_mode == MLX5DR_MATCHER_INSERT_BY_HASH)
 		attr->table.sz_col_log = mlx5dr_matcher_rules_to_tbl_depth(attr->rule.num_log);
 
+	matcher->flags |= attr->resizable ? MLX5DR_MATCHER_FLAGS_RESIZABLE : 0;
+
 	return mlx5dr_matcher_check_attr_sz(caps, attr);
 
 not_supported:
@@ -1018,6 +1085,7 @@ static int mlx5dr_matcher_create_and_connect(struct mlx5dr_matcher *matcher)
 
 static void mlx5dr_matcher_destroy_and_disconnect(struct mlx5dr_matcher *matcher)
 {
+	mlx5dr_matcher_resize_uninit(matcher);
 	mlx5dr_matcher_disconnect(matcher);
 	mlx5dr_matcher_create_uninit_shared(matcher);
 	mlx5dr_matcher_destroy_rtc(matcher, DR_MATCHER_RTC_TYPE_MATCH);
@@ -1452,3 +1520,114 @@ int mlx5dr_match_template_destroy(struct mlx5dr_match_template *mt)
 	simple_free(mt);
 	return 0;
 }
+
+static int mlx5dr_matcher_resize_precheck(struct mlx5dr_matcher *src_matcher,
+					  struct mlx5dr_matcher *dst_matcher)
+{
+	int i;
+
+	if (mlx5dr_table_is_root(src_matcher->tbl) ||
+	    mlx5dr_table_is_root(dst_matcher->tbl)) {
+		DR_LOG(ERR, "Src/dst matcher belongs to root table - resize unsupported");
+		goto out_einval;
+	}
+
+	if (src_matcher->tbl->type != dst_matcher->tbl->type) {
+		DR_LOG(ERR, "Table type mismatch for src/dst matchers");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_req_fw_wqe(src_matcher) ||
+	    mlx5dr_matcher_req_fw_wqe(dst_matcher)) {
+		DR_LOG(ERR, "Matchers require FW WQE - resize unsupported");
+		goto out_einval;
+	}
+
+	if (!mlx5dr_matcher_is_resizable(src_matcher) ||
+	    !mlx5dr_matcher_is_resizable(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is not resizable");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_insert_by_idx(src_matcher) !=
+	    mlx5dr_matcher_is_insert_by_idx(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matchers insert mode mismatch");
+		goto out_einval;
+	}
+
+	if (mlx5dr_matcher_is_in_resize(src_matcher) ||
+	    mlx5dr_matcher_is_in_resize(dst_matcher)) {
+		DR_LOG(ERR, "Src/dst matcher is already in resize");
+		goto out_einval;
+	}
+
+	/* Compare match templates - make sure the definers are equivalent */
+	if (src_matcher->num_of_mt != dst_matcher->num_of_mt) {
+		DR_LOG(ERR, "Src/dst matcher match templates mismatch");
+		goto out_einval;
+	}
+
+	if (src_matcher->action_ste.max_stes != dst_matcher->action_ste.max_stes) {
+		DR_LOG(ERR, "Src/dst matcher max STEs mismatch");
+		goto out_einval;
+	}
+
+	for (i = 0; i < src_matcher->num_of_mt; i++) {
+		if (mlx5dr_definer_compare(src_matcher->mt[i].definer,
+					   dst_matcher->mt[i].definer)) {
+			DR_LOG(ERR, "Src/dst matcher definers mismatch");
+			goto out_einval;
+		}
+	}
+
+	return 0;
+
+out_einval:
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+int mlx5dr_matcher_resize_set_target(struct mlx5dr_matcher *src_matcher,
+				     struct mlx5dr_matcher *dst_matcher)
+{
+	int ret = 0;
+
+	pthread_spin_lock(&src_matcher->tbl->ctx->ctrl_lock);
+
+	if (mlx5dr_matcher_resize_precheck(src_matcher, dst_matcher)) {
+		ret = -rte_errno;
+		goto out;
+	}
+
+	src_matcher->resize_dst = dst_matcher;
+
+	if (mlx5dr_matcher_resize_init(src_matcher)) {
+		src_matcher->resize_dst = NULL;
+		ret = -rte_errno;
+	}
+
+out:
+	pthread_spin_unlock(&src_matcher->tbl->ctx->ctrl_lock);
+	return ret;
+}
+
+int mlx5dr_matcher_resize_rule_move(struct mlx5dr_matcher *src_matcher,
+				    struct mlx5dr_rule *rule,
+				    struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!mlx5dr_matcher_is_in_resize(src_matcher))) {
+		DR_LOG(ERR, "Matcher is not resizable or not in resize");
+		goto out_einval;
+	}
+
+	if (unlikely(src_matcher != rule->matcher)) {
+		DR_LOG(ERR, "Rule doesn't belong to src matcher");
+		goto out_einval;
+	}
+
+	return mlx5dr_rule_move_hws_add(rule, attr);
+
+out_einval:
+	rte_errno = EINVAL;
+	return -rte_errno;
+}
diff --git a/drivers/net/mlx5/hws/mlx5dr_matcher.h b/drivers/net/mlx5/hws/mlx5dr_matcher.h
index 363a61fd41..0f2bf96e8b 100644
--- a/drivers/net/mlx5/hws/mlx5dr_matcher.h
+++ b/drivers/net/mlx5/hws/mlx5dr_matcher.h
@@ -26,6 +26,7 @@ enum mlx5dr_matcher_flags {
 	MLX5DR_MATCHER_FLAGS_RANGE_DEFINER	= 1 << 0,
 	MLX5DR_MATCHER_FLAGS_HASH_DEFINER	= 1 << 1,
 	MLX5DR_MATCHER_FLAGS_COLLISION		= 1 << 2,
+	MLX5DR_MATCHER_FLAGS_RESIZABLE		= 1 << 3,
 };
 
 struct mlx5dr_match_template {
@@ -59,6 +60,14 @@ struct mlx5dr_matcher_action_ste {
 	uint8_t max_stes;
 };
 
+struct mlx5dr_matcher_resize_data {
+	struct mlx5dr_pool_chunk stc;
+	struct mlx5dr_devx_obj *action_ste_rtc_0;
+	struct mlx5dr_devx_obj *action_ste_rtc_1;
+	struct mlx5dr_pool *action_ste_pool;
+	LIST_ENTRY(mlx5dr_matcher_resize_data) next;
+};
+
 struct mlx5dr_matcher {
 	struct mlx5dr_table *tbl;
 	struct mlx5dr_matcher_attr attr;
@@ -71,10 +80,12 @@ struct mlx5dr_matcher {
 	uint8_t flags;
 	struct mlx5dr_devx_obj *end_ft;
 	struct mlx5dr_matcher *col_matcher;
+	struct mlx5dr_matcher *resize_dst;
 	struct mlx5dr_matcher_match_ste match_ste;
 	struct mlx5dr_matcher_action_ste action_ste;
 	struct mlx5dr_definer *hash_definer;
 	LIST_ENTRY(mlx5dr_matcher) next;
+	LIST_HEAD(resize_data_head, mlx5dr_matcher_resize_data) resize_data;
 };
 
 static inline bool
@@ -89,6 +100,16 @@ mlx5dr_matcher_mt_is_range(struct mlx5dr_match_template *mt)
 	return (!!mt->range_definer);
 }
 
+static inline bool mlx5dr_matcher_is_resizable(struct mlx5dr_matcher *matcher)
+{
+	return !!(matcher->flags & MLX5DR_MATCHER_FLAGS_RESIZABLE);
+}
+
+static inline bool mlx5dr_matcher_is_in_resize(struct mlx5dr_matcher *matcher)
+{
+	return !!matcher->resize_dst;
+}
+
 static inline bool mlx5dr_matcher_req_fw_wqe(struct mlx5dr_matcher *matcher)
 {
 	/* Currently HWS doesn't support hash different from match or range */
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index fa19303b91..03e62a3f14 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -111,6 +111,23 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 	}
 }
 
+static void mlx5dr_rule_move_get_rtc(struct mlx5dr_rule *rule,
+				     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	struct mlx5dr_matcher *dst_matcher = rule->matcher->resize_dst;
+
+	if (rule->resize_info->rtc_0) {
+		ste_attr->rtc_0 = dst_matcher->match_ste.rtc_0->id;
+		ste_attr->retry_rtc_0 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_0->id : 0;
+	}
+	if (rule->resize_info->rtc_1) {
+		ste_attr->rtc_1 = dst_matcher->match_ste.rtc_1->id;
+		ste_attr->retry_rtc_1 = dst_matcher->col_matcher ?
+					dst_matcher->col_matcher->match_ste.rtc_1->id : 0;
+	}
+}
+
 static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 				 struct mlx5dr_rule *rule,
 				 bool err,
@@ -131,12 +148,41 @@ static void mlx5dr_rule_gen_comp(struct mlx5dr_send_engine *queue,
 	mlx5dr_send_engine_gen_comp(queue, user_data, comp_status);
 }
 
+static void
+mlx5dr_rule_save_resize_info(struct mlx5dr_rule *rule,
+			     struct mlx5dr_send_ste_attr *ste_attr)
+{
+	rule->resize_info = simple_calloc(1, sizeof(*rule->resize_info));
+	if (unlikely(!rule->resize_info)) {
+		assert(rule->resize_info);
+		rte_errno = ENOMEM;
+	}
+
+	memcpy(rule->resize_info->ctrl_seg, ste_attr->wqe_ctrl,
+	       sizeof(rule->resize_info->ctrl_seg));
+	memcpy(rule->resize_info->data_seg, ste_attr->wqe_data,
+	       sizeof(rule->resize_info->data_seg));
+
+	rule->resize_info->action_ste_pool = rule->matcher->action_ste.max_stes ?
+					     rule->matcher->action_ste.pool :
+					     NULL;
+}
+
+static void mlx5dr_rule_clear_resize_info(struct mlx5dr_rule *rule)
+{
+	if (rule->resize_info) {
+		simple_free(rule->resize_info);
+		rule->resize_info = NULL;
+	}
+}
+
 static void
 mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 			     struct mlx5dr_send_ste_attr *ste_attr)
 {
 	struct mlx5dr_match_template *mt = rule->matcher->mt;
 	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(mt);
+	struct mlx5dr_rule_match_tag *tag;
 
 	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		uint8_t *src_tag;
@@ -158,17 +204,31 @@ mlx5dr_rule_save_delete_info(struct mlx5dr_rule *rule,
 		return;
 	}
 
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_save_resize_info(rule, ste_attr);
+		tag = &rule->resize_info->tag;
+	} else {
+		tag = &rule->tag;
+	}
+
 	if (is_jumbo)
 		memcpy(rule->tag.jumbo, ste_attr->wqe_data->jumbo, MLX5DR_JUMBO_TAG_SZ);
 	else
-		memcpy(rule->tag.match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
+		memcpy(tag->match, ste_attr->wqe_data->tag, MLX5DR_MATCH_TAG_SZ);
 }
 
 static void
 mlx5dr_rule_clear_delete_info(struct mlx5dr_rule *rule)
 {
-	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher)))
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(rule->matcher))) {
 		simple_free(rule->tag_ptr);
+		return;
+	}
+
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		mlx5dr_rule_clear_resize_info(rule);
+		return;
+	}
 }
 
 static void
@@ -185,8 +245,10 @@ mlx5dr_rule_load_delete_info(struct mlx5dr_rule *rule,
 			ste_attr->range_wqe_tag = &rule->tag_ptr[1];
 			ste_attr->send_attr.range_definer_id = rule->tag_ptr[1].reserved[1];
 		}
-	} else {
+	} else if (likely(!mlx5dr_matcher_is_resizable(rule->matcher))) {
 		ste_attr->wqe_tag = &rule->tag;
+	} else {
+		ste_attr->wqe_tag = &rule->resize_info->tag;
 	}
 }
 
@@ -217,6 +279,7 @@ static int mlx5dr_rule_alloc_action_ste(struct mlx5dr_rule *rule,
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_pool *pool;
 
 	if (rule->action_ste_idx > -1 &&
 	    !matcher->attr.optimize_using_rule_idx &&
@@ -226,7 +289,11 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 		/* This release is safe only when the rule match part was deleted */
 		ste.order = rte_log2_u32(matcher->action_ste.max_stes);
 		ste.offset = rule->action_ste_idx;
-		mlx5dr_pool_chunk_free(matcher->action_ste.pool, &ste);
+
+		/* Free the original action pool if rule was resized */
+		pool = mlx5dr_matcher_is_resizable(matcher) ? rule->resize_info->action_ste_pool :
+							      matcher->action_ste.pool;
+		mlx5dr_pool_chunk_free(pool, &ste);
 	}
 }
 
@@ -263,6 +330,23 @@ static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 	apply->require_dep = 0;
 }
 
+static void mlx5dr_rule_move_init(struct mlx5dr_rule *rule,
+				  struct mlx5dr_rule_attr *attr)
+{
+	/* Save the old RTC IDs to be later used in match STE delete */
+	rule->resize_info->rtc_0 = rule->rtc_0;
+	rule->resize_info->rtc_1 = rule->rtc_1;
+	rule->resize_info->rule_idx = attr->rule_idx;
+
+	rule->rtc_0 = 0;
+	rule->rtc_1 = 0;
+
+	rule->pending_wqes = 0;
+	rule->action_ste_idx = -1;
+	rule->status = MLX5DR_RULE_STATUS_CREATING;
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_WRITING;
+}
+
 static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 					 struct mlx5dr_rule_attr *attr,
 					 uint8_t mt_idx,
@@ -343,7 +427,9 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 	/* Send WQEs to FW */
 	mlx5dr_send_stes_fw(queue, &ste_attr);
 
-	/* Backup TAG on the rule for deletion */
+	/* Backup TAG on the rule for deletion, and save ctrl/data
+	 * segments to be used when resizing the matcher.
+	 */
 	mlx5dr_rule_save_delete_info(rule, &ste_attr);
 	mlx5dr_send_engine_inc_rule(queue);
 
@@ -466,7 +552,9 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		mlx5dr_send_ste(queue, &ste_attr);
 	}
 
-	/* Backup TAG on the rule for deletion, only after insertion */
+	/* Backup TAG on the rule for deletion and resize info for
+	 * moving rules to a new matcher, only after insertion.
+	 */
 	if (!is_update)
 		mlx5dr_rule_save_delete_info(rule, &ste_attr);
 
@@ -493,7 +581,7 @@ static void mlx5dr_rule_destroy_failed_hws(struct mlx5dr_rule *rule,
 	/* Rule failed now we can safely release action STEs */
 	mlx5dr_rule_free_action_ste_idx(rule);
 
-	/* Clear complex tag */
+	/* Clear complex tag or info that was saved for matcher resizing */
 	mlx5dr_rule_clear_delete_info(rule);
 
 	/* If a rule that was indicated as burst (need to trigger HW) has failed
@@ -568,12 +656,12 @@ static int mlx5dr_rule_destroy_hws(struct mlx5dr_rule *rule,
 
 	mlx5dr_rule_load_delete_info(rule, &ste_attr);
 
-	if (unlikely(fw_wqe)) {
+	if (unlikely(fw_wqe))
 		mlx5dr_send_stes_fw(queue, &ste_attr);
-		mlx5dr_rule_clear_delete_info(rule);
-	} else {
+	else
 		mlx5dr_send_ste(queue, &ste_attr);
-	}
+
+	mlx5dr_rule_clear_delete_info(rule);
 
 	return 0;
 }
@@ -661,9 +749,11 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
-static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_rule *rule,
 					struct mlx5dr_rule_attr *attr)
 {
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+
 	if (unlikely(!attr->user_data)) {
 		rte_errno = EINVAL;
 		return rte_errno;
@@ -678,6 +768,113 @@ static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck_create(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_in_resize(rule->matcher))) {
+		/* Matcher in resize - new rules are not allowed */
+		rte_errno = EAGAIN;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck(rule, attr);
+}
+
+static int mlx5dr_rule_enqueue_precheck_update(struct mlx5dr_rule *rule,
+					       struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(mlx5dr_matcher_is_resizable(rule->matcher))) {
+		/* Update is not supported on resizable matchers */
+		rte_errno = ENOTSUP;
+		return rte_errno;
+	}
+
+	return mlx5dr_rule_enqueue_precheck_create(rule, attr);
+}
+
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue_ptr,
+				void *user_data)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_wqe_gta_ctrl_seg empty_wqe_ctrl = {0};
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_engine *queue = queue_ptr;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+
+	rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_DELETING;
+
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.notify_hw = 1;
+	ste_attr.send_attr.user_data = user_data;
+	ste_attr.rtc_0 = rule->resize_info->rtc_0;
+	ste_attr.rtc_1 = rule->resize_info->rtc_1;
+	ste_attr.used_id_rtc_0 = &rule->resize_info->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->resize_info->rtc_1;
+	ste_attr.wqe_ctrl = &empty_wqe_ctrl;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_DEACTIVATE;
+
+	if (unlikely(mlx5dr_matcher_is_insert_by_idx(matcher)))
+		ste_attr.direct_index = rule->resize_info->rule_idx;
+
+	mlx5dr_rule_load_delete_info(rule, &ste_attr);
+	mlx5dr_send_ste(queue, &ste_attr);
+
+	return 0;
+}
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr)
+{
+	bool is_jumbo = mlx5dr_matcher_mt_is_jumbo(rule->matcher->mt);
+	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
+	struct mlx5dr_matcher *matcher = rule->matcher;
+	struct mlx5dr_send_ste_attr ste_attr = {0};
+	struct mlx5dr_send_engine *queue;
+
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
+		return -rte_errno;
+
+	queue = &ctx->send_queue[attr->queue_id];
+
+	if (unlikely(mlx5dr_send_engine_err(queue))) {
+		rte_errno = EIO;
+		return rte_errno;
+	}
+
+	mlx5dr_rule_move_init(rule, attr);
+
+	mlx5dr_rule_move_get_rtc(rule, &ste_attr);
+
+	ste_attr.send_attr.opmod = MLX5DR_WQE_GTA_OPMOD_STE;
+	ste_attr.send_attr.opcode = MLX5DR_WQE_OPCODE_TBL_ACCESS;
+	ste_attr.send_attr.len = MLX5DR_WQE_SZ_GTA_CTRL + MLX5DR_WQE_SZ_GTA_DATA;
+	ste_attr.gta_opcode = MLX5DR_WQE_GTA_OP_ACTIVATE;
+	ste_attr.wqe_tag_is_jumbo = is_jumbo;
+
+	ste_attr.send_attr.rule = rule;
+	ste_attr.send_attr.fence = 0;
+	ste_attr.send_attr.notify_hw = !attr->burst;
+	ste_attr.send_attr.user_data = attr->user_data;
+
+	ste_attr.used_id_rtc_0 = &rule->rtc_0;
+	ste_attr.used_id_rtc_1 = &rule->rtc_1;
+	ste_attr.wqe_ctrl = (struct mlx5dr_wqe_gta_ctrl_seg *)rule->resize_info->ctrl_seg;
+	ste_attr.wqe_data = (struct mlx5dr_wqe_gta_data_seg_ste *)rule->resize_info->data_seg;
+	ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+				attr->rule_idx : 0;
+
+	mlx5dr_send_ste(queue, &ste_attr);
+	mlx5dr_send_engine_inc_rule(queue);
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -686,13 +883,11 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       struct mlx5dr_rule_attr *attr,
 		       struct mlx5dr_rule *rule_handle)
 {
-	struct mlx5dr_context *ctx;
 	int ret;
 
 	rule_handle->matcher = matcher;
-	ctx = matcher->tbl->ctx;
 
-	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_create(rule_handle, attr)))
 		return -rte_errno;
 
 	assert(matcher->num_of_mt >= mt_idx);
@@ -720,7 +915,7 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 {
 	int ret;
 
-	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck(rule, attr)))
 		return -rte_errno;
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
@@ -753,7 +948,7 @@ int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
 		return -rte_errno;
 	}
 
-	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+	if (unlikely(mlx5dr_rule_enqueue_precheck_update(rule_handle, attr)))
 		return -rte_errno;
 
 	ret = mlx5dr_rule_create_hws(rule_handle,
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.h b/drivers/net/mlx5/hws/mlx5dr_rule.h
index f7d97eead5..14115fe329 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.h
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.h
@@ -10,7 +10,9 @@ enum {
 	MLX5DR_ACTIONS_SZ = 12,
 	MLX5DR_MATCH_TAG_SZ = 32,
 	MLX5DR_JUMBO_TAG_SZ = 44,
-	MLX5DR_STE_SZ = 64,
+	MLX5DR_STE_SZ = MLX5DR_STE_CTRL_SZ +
+			MLX5DR_ACTIONS_SZ +
+			MLX5DR_MATCH_TAG_SZ,
 };
 
 enum mlx5dr_rule_status {
@@ -23,6 +25,12 @@ enum mlx5dr_rule_status {
 	MLX5DR_RULE_STATUS_FAILED,
 };
 
+enum mlx5dr_rule_move_state {
+	MLX5DR_RULE_RESIZE_STATE_IDLE,
+	MLX5DR_RULE_RESIZE_STATE_WRITING,
+	MLX5DR_RULE_RESIZE_STATE_DELETING,
+};
+
 struct mlx5dr_rule_match_tag {
 	union {
 		uint8_t jumbo[MLX5DR_JUMBO_TAG_SZ];
@@ -33,6 +41,17 @@ struct mlx5dr_rule_match_tag {
 	};
 };
 
+struct mlx5dr_rule_resize_info {
+	uint8_t state;
+	uint32_t rtc_0;
+	uint32_t rtc_1;
+	uint32_t rule_idx;
+	struct mlx5dr_pool *action_ste_pool;
+	struct mlx5dr_rule_match_tag tag;
+	uint8_t ctrl_seg[MLX5DR_WQE_SZ_GTA_CTRL]; /* Ctrl segment of STE: 48 bytes */
+	uint8_t data_seg[MLX5DR_STE_SZ];	  /* Data segment of STE: 64 bytes */
+};
+
 struct mlx5dr_rule {
 	struct mlx5dr_matcher *matcher;
 	union {
@@ -40,6 +59,7 @@ struct mlx5dr_rule {
 		/* Pointer to tag to store more than one tag */
 		struct mlx5dr_rule_match_tag *tag_ptr;
 		struct ibv_flow *flow;
+		struct mlx5dr_rule_resize_info *resize_info;
 	};
 	uint32_t rtc_0; /* The RTC into which the STE was inserted */
 	uint32_t rtc_1; /* The RTC into which the STE was inserted */
@@ -50,4 +70,16 @@ struct mlx5dr_rule {
 
 void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule);
 
+int mlx5dr_rule_move_hws_remove(struct mlx5dr_rule *rule,
+				void *queue, void *user_data);
+
+int mlx5dr_rule_move_hws_add(struct mlx5dr_rule *rule,
+			     struct mlx5dr_rule_attr *attr);
+
+static inline bool mlx5dr_rule_move_in_progress(struct mlx5dr_rule *rule)
+{
+	return rule->resize_info &&
+	       rule->resize_info->state != MLX5DR_RULE_RESIZE_STATE_IDLE;
+}
+
 #endif /* MLX5DR_RULE_H_ */
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 622d574bfa..936dfc1fe6 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -444,6 +444,46 @@ void mlx5dr_send_engine_flush_queue(struct mlx5dr_send_engine *queue)
 	mlx5dr_send_engine_post_ring(sq, queue->uar, wqe_ctrl);
 }
 
+static void
+mlx5dr_send_engine_update_rule_resize(struct mlx5dr_send_engine *queue,
+				      struct mlx5dr_send_ring_priv *priv,
+				      enum rte_flow_op_status *status)
+{
+	switch (priv->rule->resize_info->state) {
+	case MLX5DR_RULE_RESIZE_STATE_WRITING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			/* Backup original RTCs */
+			uint32_t orig_rtc_0 = priv->rule->resize_info->rtc_0;
+			uint32_t orig_rtc_1 = priv->rule->resize_info->rtc_1;
+
+			/* Delete partially failed move rule using resize_info */
+			priv->rule->resize_info->rtc_0 = priv->rule->rtc_0;
+			priv->rule->resize_info->rtc_1 = priv->rule->rtc_1;
+
+			/* Move rule to original RTC for future delete */
+			priv->rule->rtc_0 = orig_rtc_0;
+			priv->rule->rtc_1 = orig_rtc_1;
+		}
+		/* Clean leftovers */
+		mlx5dr_rule_move_hws_remove(priv->rule, queue, priv->user_data);
+		break;
+
+	case MLX5DR_RULE_RESIZE_STATE_DELETING:
+		if (priv->rule->status == MLX5DR_RULE_STATUS_FAILING) {
+			*status = RTE_FLOW_OP_ERROR;
+		} else {
+			*status = RTE_FLOW_OP_SUCCESS;
+			priv->rule->matcher = priv->rule->matcher->resize_dst;
+		}
+		priv->rule->resize_info->state = MLX5DR_RULE_RESIZE_STATE_IDLE;
+		priv->rule->status = MLX5DR_RULE_STATUS_CREATED;
+		break;
+
+	default:
+		break;
+	}
+}
+
 static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 					   struct mlx5dr_send_ring_priv *priv,
 					   uint16_t wqe_cnt,
@@ -465,6 +505,11 @@ static void mlx5dr_send_engine_update_rule(struct mlx5dr_send_engine *queue,
 
 	/* Update rule status for the last completion */
 	if (!priv->rule->pending_wqes) {
+		if (unlikely(mlx5dr_rule_move_in_progress(priv->rule))) {
+			mlx5dr_send_engine_update_rule_resize(queue, priv, status);
+			return;
+		}
+
 		if (unlikely(priv->rule->status == MLX5DR_RULE_STATUS_FAILING)) {
 			/* Rule completely failed and doesn't require cleanup */
 			if (!priv->rule->rtc_0 && !priv->rule->rtc_1)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b35079b30a..b003e97dc9 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -17,6 +17,8 @@
 #include <mlx5_prm.h>
 
 #include "mlx5.h"
+#include "hws/mlx5dr.h"
+#include "rte_flow.h"
 #include "rte_pmd_mlx5.h"
 #include "hws/mlx5dr.h"
 
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 2/5] net/mlx5: add resize function to ipool
  2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
@ 2024-02-02 11:56 ` Gregory Etelson
  2024-02-02 11:56 ` [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev
  Cc: getelson,
	rasland, thomas, Dariusz Sosnowski, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou, Matan Azrad

From: Maayan Kashani <mkashani@nvidia.com>

Before this patch, the ipool size was either fixed by setting max_idx
in mlx5_indexed_pool_config upon ipool creation, or auto-resized up to
the maximum limit by setting max_idx to zero upon ipool creation, in
which case the saved value is the maximum possible index.
This patch adds an ipool resize API that updates the value of max_idx
when it is not set to the maximum, i.e. when the pool is not in
auto-resize mode. It enables the allocation of new trunks, when using
malloc/zmalloc, up to the max_idx limit. Note that the number of
entries added by a resize must be divisible by trunk_size.
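
A hedged usage sketch (the helper below is illustrative; only
mlx5_ipool_resize() is added by this patch, and the pool is assumed to
have a non-zero trunk_size as normalized at creation time):

#include "mlx5_utils.h"

/* Sketch: grow a fixed-size ipool by at least 'extra' entries. */
static int
ipool_grow_sketch(struct mlx5_indexed_pool *pool, uint32_t extra)
{
	uint32_t trunk = pool->cfg.trunk_size;
	/* Round up to a trunk_size multiple, as required by the API. */
	uint32_t num_entries = ((extra + trunk - 1) / trunk) * trunk;

	return mlx5_ipool_resize(pool, num_entries);
}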

Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
---
 drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 4db738785f..e28db2ec43 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
 	return NULL;
 }
 
+int
+mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries)
+{
+	uint32_t cur_max_idx;
+	uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+
+	if (num_entries % pool->cfg.trunk_size) {
+		DRV_LOG(ERR, "num_entries param should be a multiple of trunk_size(=%u)\n",
+			pool->cfg.trunk_size);
+		return -EINVAL;
+	}
+
+	mlx5_ipool_lock(pool);
+	cur_max_idx = pool->cfg.max_idx + num_entries;
+	/* If the ipool max idx is above maximum or uint overflow occurred. */
+	if (cur_max_idx > max_index || cur_max_idx < num_entries) {
+		DRV_LOG(ERR, "Ipool resize failed\n");
+		DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n",
+		num_entries, cur_max_idx, max_index);
+		mlx5_ipool_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* Update maximum entries number. */
+	pool->cfg.max_idx = cur_max_idx;
+	mlx5_ipool_unlock(pool);
+	return 0;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 82e8298781..f3c0d76a6d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
  */
 void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
 
+/**
+ * This function resizes the ipool.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param num_entries
+ *   Number of entries to be added to the pool.
+ *   This number should be divisible by trunk_size.
+ *
+ * @return
+ *   - non-zero value on error.
+ *   - 0 on success.
+ *
+ */
+int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries);
+
 /**
  * This function allocates new empty Three-level table.
  *
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create
  2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
  2024-02-02 11:56 ` [PATCH 2/5] net/mlx5: add resize function to ipool Gregory Etelson
@ 2024-02-02 11:56 ` Gregory Etelson
  2024-02-02 11:56 ` [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
  2024-02-02 11:56 ` [PATCH 5/5] net/mlx5: add support for flow table resizing Gregory Etelson
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev
  Cc: getelson,
	rasland, thomas, Dariusz Sosnowski, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou, Matan Azrad, Rongwei Liu

Modified the conditionals in `flow_hw_table_create()` to use bitwise
AND instead of equality checks when evaluating the
`table_cfg->attr->specialize` bitmask.
This allows for greater flexibility, as the bitmask may encapsulate
multiple flags.
The patch maintains the previous behavior for single flag values,
while adding support for multiple flags.
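
For illustration, a small sketch of the before/after semantics:

#include <rte_flow.h>

static int wire_orig_hint_set(uint32_t specialize)
{
	/* Old: specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG
	 * stops matching once any additional (e.g. future) bit is set.
	 * New: the WIRE_ORIG hint keeps being honored alongside other bits.
	 */
	return !!(specialize & RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG);
}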

Fixes: 592d5367b5e4 ("net/mlx5: enable hint in async flow table")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index da873ae2e2..3125500641 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4368,12 +4368,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template. */
 	for (i = 0; i < nb_item_templates; i++) {
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level
  2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
                   ` (2 preceding siblings ...)
  2024-02-02 11:56 ` [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
@ 2024-02-02 11:56 ` Gregory Etelson
  2024-02-02 11:56 ` [PATCH 5/5] net/mlx5: add support for flow table resizing Gregory Etelson
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev
  Cc: getelson,
	rasland, thomas, Dariusz Sosnowski, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou, Matan Azrad

The multi-pattern action related structures and their management code
have been moved to the table level.
This refactoring is required for the upcoming table resize feature.
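
As a rough sketch of the per-table bookkeeping introduced below, each
multi-pattern segment covers a contiguous range of flow resource
indexes, so a flow can locate the mlx5dr action objects created for its
segment (this mirrors mlx5_multi_pattern_segment_find() from the diff;
the sizes are made up):

#include <stdint.h>

/* Sketch: map a flow resource index to its multi-pattern segment. */
struct segment_sketch { uint32_t head_index; uint32_t capacity; };

static int
segment_find_sketch(const struct segment_sketch *segs, int n, uint32_t idx)
{
	int i;

	for (i = 0; i < n; i++)
		if (idx < segs[i].head_index + segs[i].capacity)
			return i;
	return -1;
}

/* E.g. a table created with 64 flows and later resized by another 128:
 *   segs[0] = { .head_index = 1,  .capacity = 64  };  covers idx 1..64
 *   segs[1] = { .head_index = 65, .capacity = 128 };  covers idx 65..192
 */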

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  73 +++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 229 +++++++++++++++-----------------
 2 files changed, 177 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b003e97dc9..497d4b0f0c 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1390,7 +1390,6 @@ struct mlx5_hw_encap_decap_action {
 	/* Is header_reformat action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1413,7 +1412,6 @@ struct mlx5_hw_modify_header_action {
 	/* Is MODIFY_HEADER action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
@@ -1467,6 +1465,76 @@ struct mlx5_flow_group {
 #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
 
+#define MLX5_MULTIPATTERN_ENCAP_NUM 5
+#define MLX5_MAX_TABLE_RESIZE_NUM 64
+
+struct mlx5_multi_pattern_segment {
+	uint32_t capacity;
+	uint32_t head_index;
+	struct mlx5dr_action *mhdr_action;
+	struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM];
+};
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		/**
+		 * insert_header structure is larger than reformat_header.
+		 * Enclosing these structures with union will case a gap between
+		 * reformat_hdr array elements.
+		 * mlx5dr_action_create_reformat() expects adjacent array elements.
+		 */
+		struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+	struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM];
+};
+
+static __rte_always_inline void
+mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	mpctx->segments[0].head_index = 1;
+}
+
+static __rte_always_inline bool
+mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	return mpctx->segments[0].head_index == 1;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		if (!mpctx->segments[i].capacity)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
+				uint32_t flow_resource_ix)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
@@ -1487,6 +1555,7 @@ struct rte_flow_template_table {
 	uint8_t nb_item_templates; /* Item template number. */
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
+	struct mlx5_tbl_multi_pattern_ctx mpctx;
 };
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3125500641..e5c770c6fc 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -74,41 +74,14 @@ struct mlx5_indlst_legacy {
 #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
 (((const struct encap_type *)(ptr))->definition)
 
-struct mlx5_multi_pattern_ctx {
-	union {
-		struct mlx5dr_action_reformat_header reformat_hdr;
-		struct mlx5dr_action_mh_pattern mh_pattern;
-	};
-	union {
-		/* action template auxiliary structures for object destruction */
-		struct mlx5_hw_encap_decap_action *encap;
-		struct mlx5_hw_modify_header_action *mhdr;
-	};
-	/* multi pattern action */
-	struct mlx5dr_rule_action *rule_action;
-};
-
-#define MLX5_MULTIPATTERN_ENCAP_NUM 4
-
-struct mlx5_tbl_multi_pattern_ctx {
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
-
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} mh;
-};
-
-#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
-
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error);
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
@@ -570,28 +543,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 static void
 flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
 {
-	if (encap_decap->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
-	}
-	if (encap_decap->action)
+	if (encap_decap->action && !encap_decap->multi_pattern)
 		mlx5dr_action_destroy(encap_decap->action);
 }
 
 static void
 flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
 {
-	if (mhdr->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
-	}
-	if (mhdr->action)
+	if (mhdr->action && !mhdr->multi_pattern)
 		mlx5dr_action_destroy(mhdr->action);
 }
 
@@ -1870,6 +1829,7 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	enum mlx5dr_table_type tbl_type = get_mlx5dr_table_type(attr);
 	struct mlx5dr_action_reformat_header hdr;
+	struct mlx5dr_action_insert_header ihdr;
 	uint8_t buf[MLX5_ENCAP_MAX_LEN];
 	bool shared_rfmt = false;
 	int ret;
@@ -1911,21 +1871,25 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 		acts->encap_decap->shared = true;
 	} else {
 		uint32_t ix;
-		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
-							    mp_reformat_ix;
+		typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat +
+							mp_reformat_ix;
 
-		ix = reformat_ctx->elements_num++;
-		reformat_ctx->ctx[ix].reformat_hdr = hdr;
-		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
-		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		ix = reformat->elements_num++;
+		if (refmt_type == MLX5DR_ACTION_TYP_INSERT_HEADER)
+			reformat->insert_hdr[ix] = ihdr;
+		else
+			reformat->reformat_hdr[ix] = hdr;
 		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
 		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->multi_pattern = 1;
 		acts->encap_decap->data_size = data_size;
+		acts->encap_decap->action_type = refmt_type;
 		ret = __flow_hw_act_data_encap_append
 			(priv, acts, (at->actions + reformat_src)->type,
 			 reformat_src, at->reformat_off, data_size);
 		if (ret)
 			return -rte_errno;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -1974,12 +1938,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	} else {
 		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
 		uint32_t idx = mh->elements_num;
-		struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
 
-		mh_ctx->mh_pattern = pattern;
-		mh_ctx->mhdr = acts->mhdr;
-		mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+		mh->pattern[mh->elements_num++] = pattern;
+		acts->mhdr->multi_pattern = 1;
 		acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -2539,16 +2502,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < tbl->nb_action_templates; i++) {
 		if (__flow_hw_actions_translate(dev, &tbl->cfg,
 						&tbl->ats[i].acts,
 						tbl->ats[i].action_template,
-						&mpat, error))
+						&tbl->mpctx, error))
 			goto err;
 	}
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0],
+					     rte_log2_u32(tbl->cfg.attr.nb_flows),
+					     error);
 	if (ret)
 		goto err;
 	return 0;
@@ -2922,6 +2886,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_multi_pattern_segment *mp_segment;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -3052,6 +3017,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+			if (!mp_segment || !mp_segment->mhdr_action)
+				return -1;
+			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3203,9 +3172,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				     age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
-		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
-				job->flow->res_idx - 1;
-		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
+		int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type);
+		struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos];
+
+		if (ix < 0)
+			return -1;
+		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->reformat_action[ix])
+			return -1;
+		ra->action = mp_segment->reformat_action[ix];
+		ra->reformat.offset = job->flow->res_idx - 1;
+		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
@@ -4111,86 +4088,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error)
 {
+	int ret = 0;
 	uint32_t i;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx;
 	const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
 	uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
-	struct mlx5dr_action *dr_action;
-	uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+	struct mlx5dr_action *dr_action = NULL;
 
 	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
-		uint32_t j;
-		uint32_t *reformat_refcnt;
-		typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
-		struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i;
 		enum mlx5dr_action_type reformat_type =
 			mlx5_multi_pattern_reformat_index_to_type(i);
 
 		if (!reformat->elements_num)
 			continue;
-		for (j = 0; j < reformat->elements_num; j++)
-			hdr[j] = reformat->ctx[j].reformat_hdr;
-		reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
-					      rte_socket_id());
-		if (!reformat_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate multi-pattern encap counter");
-		*reformat_refcnt = reformat->elements_num;
-		dr_action = mlx5dr_action_create_reformat
-			(priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
-			 bulk_size, flags);
+		dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ?
+			mlx5dr_action_create_insert_header
+			(priv->dr_ctx, reformat->elements_num,
+			 reformat->insert_hdr, bulk_size, flags) :
+			mlx5dr_action_create_reformat
+			(priv->dr_ctx, reformat_type, reformat->elements_num,
+			 reformat->reformat_hdr, bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(reformat_refcnt);
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern encap action");
-		}
-		for (j = 0; j < reformat->elements_num; j++) {
-			reformat->ctx[j].rule_action->action = dr_action;
-			reformat->ctx[j].encap->action = dr_action;
-			reformat->ctx[j].encap->multi_pattern = 1;
-			reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 NULL,
+						 "failed to create multi-pattern encap action");
+			goto error;
 		}
+		segment->reformat_action[i] = dr_action;
 	}
-	if (mpat->mh.elements_num) {
-		typeof(mpat->mh) *mh = &mpat->mh;
-		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-		uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
-						 0, rte_socket_id());
-
-		if (!mh_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate modify header counter");
-		*mh_refcnt = mpat->mh.elements_num;
-		for (i = 0; i < mpat->mh.elements_num; i++)
-			pattern[i] = mh->ctx[i].mh_pattern;
+	if (mpctx->mh.elements_num) {
+		typeof(mpctx->mh) *mh = &mpctx->mh;
 		dr_action = mlx5dr_action_create_modify_header
-			(priv->dr_ctx, mpat->mh.elements_num, pattern,
+			(priv->dr_ctx, mpctx->mh.elements_num, mh->pattern,
 			 bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(mh_refcnt);
-			return rte_flow_error_set(error, rte_errno,
+			ret = rte_flow_error_set(error, rte_errno,
 						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern header modify action");
-		}
-		for (i = 0; i < mpat->mh.elements_num; i++) {
-			mh->ctx[i].rule_action->action = dr_action;
-			mh->ctx[i].mhdr->action = dr_action;
-			mh->ctx[i].mhdr->multi_pattern = 1;
-			mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+						  NULL, "failed to create multi-pattern header modify action");
+			goto error;
 		}
+		segment->mhdr_action = dr_action;
+	}
+	if (dr_action) {
+		segment->capacity = RTE_BIT32(bulk_size);
+		if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1])
+			segment[1].head_index = segment->head_index + segment->capacity;
 	}
-
 	return 0;
+error:
+	mlx5_destroy_multi_pattern_segment(segment);
+	return ret;
 }
 
 static int
@@ -4203,7 +4159,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint8_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < nb_action_templates; i++) {
 		uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
@@ -4224,16 +4179,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 		ret = __flow_hw_actions_translate(dev, &tbl->cfg,
 						  &tbl->ats[i].acts,
 						  action_templates[i],
-						  &mpat, error);
+						  &tbl->mpctx, error);
 		if (ret) {
 			i++;
 			goto at_error;
 		}
 	}
 	tbl->nb_action_templates = nb_action_templates;
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
-	if (ret)
-		goto at_error;
+	if (mlx5_is_multi_pattern_active(&tbl->mpctx)) {
+		ret = mlx5_tbl_multi_pattern_process(dev, tbl,
+						     &tbl->mpctx.segments[0],
+						     rte_log2_u32(tbl->cfg.attr.nb_flows),
+						     error);
+		if (ret)
+			goto at_error;
+	}
 	return 0;
 
 at_error:
@@ -4600,6 +4560,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 				    action_templates, nb_action_templates, error);
 }
 
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment)
+{
+	int i;
+
+	if (segment->mhdr_action)
+		mlx5dr_action_destroy(segment->mhdr_action);
+	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+		if (segment->reformat_action[i])
+			mlx5dr_action_destroy(segment->reformat_action[i]);
+	}
+	segment->capacity = 0;
+}
+
+static void
+flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table)
+{
+	int sx;
+
+	for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++)
+		mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx);
+}
 /**
  * Destroy flow table.
  *
@@ -4645,6 +4627,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
+	flow_hw_destroy_table_multi_pattern_ctx(table);
 	mlx5dr_matcher_destroy(table->matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH 5/5] net/mlx5: add support for flow table resizing
  2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
                   ` (3 preceding siblings ...)
  2024-02-02 11:56 ` [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
@ 2024-02-02 11:56 ` Gregory Etelson
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-02 11:56 UTC (permalink / raw)
  To: dev
  Cc: getelson,  ,
	rasland, thomas, Dariusz Sosnowski, Viacheslav Ovsiienko,
	Ori Kam, Suanming Mou, Matan Azrad

Support the template table resize API in the PMD.
The patch allows increasing the capacity of an existing table.
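
For illustration, below is a minimal application-side sketch of the
intended resize flow. The rte_flow_template_table_resize(),
rte_flow_async_update_resized() and rte_flow_template_table_resize_complete()
wrappers are assumed to come from the ethdev table resize series this
patch set depends on, and the completion polling is deliberately
simplified (statuses in res[] should be checked in real code):

    #include <rte_common.h>
    #include <rte_flow.h>

    static int
    grow_template_table(uint16_t port_id, uint32_t queue,
                        struct rte_flow_template_table *table,
                        struct rte_flow **flows, uint32_t nb_flows,
                        uint32_t new_capacity)
    {
        const struct rte_flow_op_attr op_attr = { .postpone = 0 };
        struct rte_flow_op_result res[64];
        struct rte_flow_error error;
        uint32_t i, done = 0;
        int ret;

        /* 1. Grow the table; new rules will land in the larger matcher. */
        ret = rte_flow_template_table_resize(port_id, table, new_capacity,
                                             &error);
        if (ret)
            return ret;
        /* 2. Move every pre-existing flow to the new matcher. */
        for (i = 0; i < nb_flows; i++) {
            ret = rte_flow_async_update_resized(port_id, queue, &op_attr,
                                                flows[i], NULL, &error);
            if (ret)
                return ret;
        }
        /* 3. Poll until all move operations have completed. */
        while (done < nb_flows) {
            ret = rte_flow_pull(port_id, queue, res, RTE_DIM(res), &error);
            if (ret < 0)
                return ret;
            done += (uint32_t)ret;
        }
        /* 4. Retire the old matcher once no rules reference it. */
        return rte_flow_template_table_resize_complete(port_id, table, &error);
    }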

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 ++++
 drivers/net/mlx5/mlx5_flow.h    |  84 ++++--
 drivers/net/mlx5/mlx5_flow_hw.c | 512 +++++++++++++++++++++++++++-----
 drivers/net/mlx5/mlx5_host.c    | 211 +++++++++++++
 5 files changed, 758 insertions(+), 105 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_host.c

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f2e2e04429..ff0ca7fa42 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -380,6 +380,9 @@ enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
 	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 	MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized flow destroy job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize. */
 };
 
 enum mlx5_hw_indirect_type {
@@ -422,6 +425,8 @@ struct mlx5_hw_q {
 	struct mlx5_hw_q_job **job; /* LIFO header. */
 	struct rte_ring *indir_cq; /* Indirect action SW completion queue. */
 	struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */
+	struct rte_ring *flow_transfer_pending;
+	struct rte_ring *flow_transfer_completed;
 } __rte_cache_aligned;
 
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 85e8c77c81..521119e138 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1198,6 +1198,20 @@ mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
 			  uint8_t pattern_template_index,
 			  uint32_t *hash, struct rte_flow_error *error);
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error);
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error);
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1253,6 +1267,9 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.async_action_list_handle_query_update =
 		mlx5_flow_async_action_list_handle_query_update,
 	.flow_calc_table_hash = mlx5_flow_calc_table_hash,
+	.flow_template_table_resize = mlx5_template_table_resize,
+	.flow_update_resized = mlx5_flow_async_update_resized,
+	.flow_template_table_resize_complete = mlx5_table_resize_complete,
 };
 
 /* Tunnel information. */
@@ -11115,6 +11132,40 @@ mlx5_flow_calc_table_hash(struct rte_eth_dev *dev,
 					  hash, error);
 }
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP);
+	return fops->table_resize(dev, table, nb_rules, error);
+}
+
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP);
+	return fops->table_resize_complete(dev, table, error);
+}
+
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *op_attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP);
+	return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 497d4b0f0c..c7d84af659 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1210,6 +1210,7 @@ struct rte_flow {
 	uint32_t tunnel:1;
 	uint32_t meter:24; /**< Holds flow meter id. */
 	uint32_t indirect_type:2; /**< Indirect action type. */
+	uint32_t matcher_selector:1; /**< Matcher index in resizable table. */
 	uint32_t rix_mreg_copy;
 	/**< Index to metadata register copy table resource. */
 	uint32_t counter; /**< Holds flow counter. */
@@ -1255,6 +1256,7 @@ struct rte_flow_hw {
 	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	uint8_t mt_idx;
+	uint8_t matcher_selector:1;
 	uint32_t age_idx;
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
@@ -1469,6 +1471,11 @@ struct mlx5_flow_group {
 #define MLX5_MAX_TABLE_RESIZE_NUM 64
 
 struct mlx5_multi_pattern_segment {
+	/*
+	 * Number of Modify Header Argument Objects allocated for the action
+	 * in that segment.
+	 * Capacity is always a power of 2.
+	 */
 	uint32_t capacity;
 	uint32_t head_index;
 	struct mlx5dr_action *mhdr_action;
@@ -1507,43 +1514,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
 	return mpctx->segments[0].head_index == 1;
 }
 
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		if (!mpctx->segments[i].capacity)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
-				uint32_t flow_resource_ix)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		uint32_t limit = mpctx->segments[i].head_index +
-				 mpctx->segments[i].capacity;
-
-		if (flow_resource_ix < limit)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
 };
 
+struct mlx5_matcher_info {
+	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	uint32_t refcnt;
+};
+
 struct rte_flow_template_table {
 	LIST_ENTRY(rte_flow_template_table) next;
 	struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */
-	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	struct mlx5_matcher_info matcher_info[2];
+	uint32_t matcher_selector;
+	rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */
 	/* Item templates bind to the table. */
 	struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
 	/* Action templates bind to the table. */
@@ -1556,8 +1542,34 @@ struct rte_flow_template_table {
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
 	struct mlx5_tbl_multi_pattern_ctx mpctx;
+	struct mlx5dr_matcher_attr matcher_attr;
 };
 
+static __rte_always_inline struct mlx5dr_matcher *
+mlx5_table_matcher(const struct rte_flow_template_table *table)
+{
+	return table->matcher_info[table->matcher_selector].matcher;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table,
+				uint32_t flow_resource_ix)
+{
+	int i;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx;
+
+	if (likely(!rte_flow_table_resizable(0, &table->cfg.attr)))
+		return &mpctx->segments[0];
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 #endif
 
 /*
@@ -2177,6 +2189,17 @@ typedef int
 			 const struct rte_flow_item pattern[],
 			 uint8_t pattern_template_index,
 			 uint32_t *hash, struct rte_flow_error *error);
+typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev,
+				   struct rte_flow_template_table *table,
+				   uint32_t nb_rules, struct rte_flow_error *error);
+typedef int (*mlx5_flow_update_resized_t)
+			(struct rte_eth_dev *dev, uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow *rule, void *user_data,
+			 struct rte_flow_error *error);
+typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev,
+				       struct rte_flow_template_table *table,
+				       struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -2250,6 +2273,9 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_async_action_list_handle_query_update_t
 		async_action_list_handle_query_update;
 	mlx5_flow_calc_table_hash_t flow_calc_table_hash;
+	mlx5_table_resize_t table_resize;
+	mlx5_flow_update_resized_t flow_update_resized;
+	table_resize_complete_t table_resize_complete;
 };
 
 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index e5c770c6fc..874ae00028 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2886,7 +2886,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
-	struct mlx5_multi_pattern_segment *mp_segment;
+	struct mlx5_multi_pattern_segment *mp_segment = NULL;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -2900,17 +2900,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
-	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) {
+	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) {
 		uint16_t pos = hw_acts->mhdr->pos;
 
-		if (!hw_acts->mhdr->shared) {
-			rule_acts[pos].modify_header.offset =
-						job->flow->res_idx - 1;
-			rule_acts[pos].modify_header.data =
-						(uint8_t *)job->mhdr_cmd;
-			rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
-				   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
-		}
+		mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->mhdr_action)
+			return -1;
+		rule_acts[pos].action = mp_segment->mhdr_action;
+		/* offset is relative to DR action */
+		rule_acts[pos].modify_header.offset =
+					job->flow->res_idx - mp_segment->head_index;
+		rule_acts[pos].modify_header.data =
+					(uint8_t *)job->mhdr_cmd;
+		rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
+			   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
 	}
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
@@ -3017,10 +3020,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
-			mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
-			if (!mp_segment || !mp_segment->mhdr_action)
-				return -1;
-			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3177,11 +3176,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 
 		if (ix < 0)
 			return -1;
-		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment)
+			mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
 		if (!mp_segment || !mp_segment->reformat_action[ix])
 			return -1;
 		ra->action = mp_segment->reformat_action[ix];
-		ra->reformat.offset = job->flow->res_idx - 1;
+		/* reformat offset is relative to selected DR action */
+		ra->reformat.offset = job->flow->res_idx - mp_segment->head_index;
 		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
@@ -3353,10 +3354,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					    pattern_template_index, job);
 	if (!rule_items)
 		goto error;
-	ret = mlx5dr_rule_create(table->matcher,
-				 pattern_template_index, rule_items,
-				 action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+		flow->matcher_selector = selector;
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3473,9 +3490,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto error;
 	}
-	ret = mlx5dr_rule_create(table->matcher,
-				 0, items, action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3655,7 +3686,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 					  "fail to destroy rte flow: flow queue full");
-	job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
+	job->type = !rte_flow_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ?
+		    MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY;
 	job->user_data = user_data;
 	job->flow = fh;
 	rule_attr.user_data = job;
@@ -3767,6 +3799,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 	}
 }
 
+static __rte_always_inline int
+mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev,
+				uint32_t queue, struct rte_flow_op_result res[],
+				uint16_t n_res)
+{
+	uint32_t size, i;
+	struct mlx5_hw_q_job *job = NULL;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed;
+
+	size = RTE_MIN(rte_ring_count(ring), n_res);
+	for (i = 0; i < size; i++) {
+		res[i].status = RTE_FLOW_OP_SUCCESS;
+		rte_ring_dequeue(ring, (void **)&job);
+		res[i].user_data = job->user_data;
+		flow_hw_job_put(priv, job, queue);
+	}
+	return (int)size;
+}
+
 static inline int
 __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 				 uint32_t queue,
@@ -3815,6 +3867,76 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	return ret_comp;
 }
 
+static __rte_always_inline void
+hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
+			       struct mlx5_hw_q_job *job,
+			       uint32_t queue, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	/* Release the original resource index in case of update. */
+	uint32_t res_idx = flow->res_idx;
+
+	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
+		flow_hw_jump_release(dev, flow->jump);
+	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
+		mlx5_hrxq_obj_release(dev, flow->hrxq);
+	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
+		flow_hw_age_count_release(priv, queue,
+					  flow, error);
+	if (flow->mtr_id) {
+		mlx5_ipool_free(pool->idx_pool,	flow->mtr_id);
+		flow->mtr_id = 0;
+	}
+	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
+		if (table) {
+			mlx5_ipool_free(table->resource, res_idx);
+			mlx5_ipool_free(table->flow, flow->idx);
+		}
+	} else {
+		rte_memcpy(flow, job->upd_flow,
+			   offsetof(struct rte_flow_hw, rule));
+		mlx5_ipool_free(table->resource, res_idx);
+	}
+}
+
+static __rte_always_inline void
+hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
+		      struct mlx5_hw_q_job *job,
+		      uint32_t queue, enum rte_flow_op_status status,
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	uint32_t selector = flow->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+
+	switch (job->type) {
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		__atomic_add_fetch(&table->matcher_info[selector].refcnt,
+				   1, __ATOMIC_RELAXED);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+		__atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1,
+				   __ATOMIC_RELAXED);
+		hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		if (status == RTE_FLOW_OP_SUCCESS) {
+			__atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1,
+					   __ATOMIC_RELAXED);
+			__atomic_add_fetch(&table->matcher_info[other_selector].refcnt,
+					   1, __ATOMIC_RELAXED);
+			flow->matcher_selector = other_selector;
+		}
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * Pull the enqueued flows.
  *
@@ -3843,9 +3965,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
-	uint32_t res_idx;
 	int ret, i;
 
 	/* 1. Pull the flow completion. */
@@ -3856,31 +3976,20 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				"fail to query flow queue");
 	for (i = 0; i <  ret; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
-		/* Release the original resource index in case of update. */
-		res_idx = job->flow->res_idx;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY ||
-		    job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) {
-			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
-				flow_hw_jump_release(dev, job->flow->jump);
-			else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-				mlx5_hrxq_obj_release(dev, job->flow->hrxq);
-			if (mlx5_hws_cnt_id_valid(job->flow->cnt_id))
-				flow_hw_age_count_release(priv, queue,
-							  job->flow, error);
-			if (job->flow->mtr_id) {
-				mlx5_ipool_free(pool->idx_pool,	job->flow->mtr_id);
-				job->flow->mtr_id = 0;
-			}
-			if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-				mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
-			} else {
-				rte_memcpy(job->flow, job->upd_flow,
-					offsetof(struct rte_flow_hw, rule));
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-			}
+		switch (job->type) {
+		case MLX5_HW_Q_JOB_TYPE_DESTROY:
+		case MLX5_HW_Q_JOB_TYPE_UPDATE:
+			hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+			break;
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+			hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error);
+			break;
+		default:
+			break;
 		}
 		flow_hw_job_put(priv, job, queue);
 	}
@@ -3888,24 +3997,36 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	if (ret < n_res)
 		ret += __flow_hw_pull_indir_action_comp(dev, queue, &res[ret],
 							n_res - ret);
+	if (ret < n_res)
+		ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret],
+						       n_res - ret);
+
 	return ret;
 }
 
+static uint32_t
+mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q)
+{
+	void *job = NULL;
+	uint32_t i, size = rte_ring_count(pending_q);
+
+	for (i = 0; i < size; i++) {
+		rte_ring_dequeue(pending_q, &job);
+		rte_ring_enqueue(cmpl_q, job);
+	}
+	return size;
+}
+
 static inline uint32_t
 __flow_hw_push_action(struct rte_eth_dev *dev,
 		    uint32_t queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_ring *iq = priv->hw_q[queue].indir_iq;
-	struct rte_ring *cq = priv->hw_q[queue].indir_cq;
-	void *job = NULL;
-	uint32_t ret, i;
+	struct mlx5_hw_q *hw_q = &priv->hw_q[queue];
 
-	ret = rte_ring_count(iq);
-	for (i = 0; i < ret; i++) {
-		rte_ring_dequeue(iq, &job);
-		rte_ring_enqueue(cq, job);
-	}
+	mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq);
+	mlx5_hw_push_queue(hw_q->flow_transfer_pending,
+			   hw_q->flow_transfer_completed);
 	if (!priv->shared_host) {
 		if (priv->hws_ctpool)
 			mlx5_aso_push_wqe(priv->sh,
@@ -4314,6 +4435,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	grp = container_of(ge, struct mlx5_flow_group, entry);
 	tbl->grp = grp;
 	/* Prepare matcher information. */
+	matcher_attr.resizable = !!rte_flow_table_resizable(dev->data->port_id, &table_cfg->attr);
 	matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY;
 	matcher_attr.priority = attr->flow_attr.priority;
 	matcher_attr.optimize_using_rule_idx = true;
@@ -4332,7 +4454,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
 
 		if ((attr->specialize & val) == val) {
-			DRV_LOG(INFO, "Invalid hint value %x",
+			DRV_LOG(ERR, "Invalid hint value %x",
 				attr->specialize);
 			rte_errno = EINVAL;
 			goto it_error;
@@ -4374,10 +4496,11 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		i = nb_item_templates;
 		goto it_error;
 	}
-	tbl->matcher = mlx5dr_matcher_create
+	tbl->matcher_info[0].matcher = mlx5dr_matcher_create
 		(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
-	if (!tbl->matcher)
+	if (!tbl->matcher_info[0].matcher)
 		goto at_error;
+	tbl->matcher_attr = matcher_attr;
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		    MLX5DR_TABLE_TYPE_NIC_RX);
@@ -4385,6 +4508,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
+	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
@@ -4556,6 +4680,11 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 
 	if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
 		return NULL;
+	if (!cfg.attr.flow_attr.group && rte_flow_table_resizable(dev->data->port_id, attr)) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "table cannot be resized: invalid group");
+		return NULL;
+	}
 	return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
 				    action_templates, nb_action_templates, error);
 }
@@ -4628,7 +4757,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 				   1, __ATOMIC_RELAXED);
 	}
 	flow_hw_destroy_table_multi_pattern_ctx(table);
-	mlx5dr_matcher_destroy(table->matcher);
+	if (table->matcher_info[0].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[0].matcher);
+	if (table->matcher_info[1].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
@@ -9178,6 +9310,16 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
 	return true;
 }
 
+static __rte_always_inline struct rte_ring *
+mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue);
+	return rte_ring_create(mz_name, size, SOCKET_ID_ANY,
+			       RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -9305,7 +9447,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		char mz_name[RTE_MEMZONE_NAMESIZE];
 		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
@@ -9339,22 +9480,22 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_cq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		priv->hw_q[i].indir_cq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq");
 		if (!priv->hw_q[i].indir_cq)
 			goto err;
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_iq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		priv->hw_q[i].indir_iq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq");
 		if (!priv->hw_q[i].indir_iq)
 			goto err;
+		priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "pending_transfer");
+		if (!priv->hw_q[i].flow_transfer_pending)
+			goto err;
+		priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "completed_transfer");
+		if (!priv->hw_q[i].flow_transfer_completed)
+			goto err;
 	}
 	dr_ctx_attr.pd = priv->sh->cdev->pd;
 	dr_ctx_attr.queues = nb_q_updated;
@@ -9570,6 +9711,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	for (i = 0; i < nb_q_updated; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -11494,7 +11637,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 	items = flow_hw_get_rule_items(dev, table, pattern,
 				       pattern_template_index,
 				       &job);
-	res = mlx5dr_rule_hash_calculate(table->matcher, items,
+	res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items,
 					 pattern_template_index,
 					 MLX5DR_RULE_HASH_CALC_MODE_RAW,
 					 hash);
@@ -11506,6 +11649,220 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev,
+					   struct rte_flow_template_table *table,
+					   uint32_t nb_flows,
+					   struct rte_flow_error *error)
+{
+	struct mlx5_multi_pattern_segment *segment = table->mpctx.segments;
+	uint32_t bulk_size;
+	int i, ret;
+
+	/**
+	 * A segment always allocates its Modify Header Argument Objects in
+	 * powers of 2.
+	 * On resize, the PMD adds the minimal required number of argument objects.
+	 * For example, if the table size was 10, 16 argument objects were allocated.
+	 * Resizing to 15 will not add new objects.
+	 */
+	for (i = 1;
+	     i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity;
+	     i++, segment++);
+	if (i == MLX5_MAX_TABLE_RESIZE_NUM)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	if (segment->head_index - 1 >= nb_flows)
+		return 0;
+	bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1);
+	ret = mlx5_tbl_multi_pattern_process(dev, table, segment,
+					     rte_log2_u32(bulk_size),
+					     error);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	return i;
+}
+
+static int
+flow_hw_table_resize(struct rte_eth_dev *dev,
+		     struct rte_flow_template_table *table,
+		     uint32_t nb_flows,
+		     struct rte_flow_error *error)
+{
+	struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr;
+	struct mlx5_multi_pattern_segment *segment = NULL;
+	struct mlx5dr_matcher *matcher = NULL;
+	uint32_t i, selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	int ret;
+
+	if (!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (table->matcher_info[other_selector].matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "last table resize was not completed");
+	if (nb_flows <= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "shrinking table is not supported");
+	ret = mlx5_ipool_resize(table->flow, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize flows pool");
+	ret = mlx5_ipool_resize(table->resource, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize resources pool");
+	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
+		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
+		if (ret < 0)
+			return ret;
+		if (ret > 0)
+			segment = table->mpctx.segments + ret;
+	}
+	for (i = 0; i < table->nb_item_templates; i++)
+		mt[i] = table->its[i]->mt;
+	for (i = 0; i < table->nb_action_templates; i++)
+		at[i] = table->ats[i].action_template->tmpl;
+	nb_flows = rte_align32pow2(nb_flows);
+	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
+	matcher = mlx5dr_matcher_create(table->grp->tbl, mt,
+					table->nb_item_templates, at,
+					table->nb_action_templates,
+					&matcher_attr);
+	if (!matcher) {
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to create new matcher");
+		goto error;
+	}
+	rte_rwlock_write_lock(&table->matcher_replace_rwlk);
+	ret = mlx5dr_matcher_resize_set_target
+			(table->matcher_info[selector].matcher, matcher);
+	if (ret) {
+		rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to initiate matcher swap");
+		goto error;
+	}
+	table->cfg.attr.nb_flows = nb_flows;
+	table->matcher_info[other_selector].matcher = matcher;
+	table->matcher_info[other_selector].refcnt = 0;
+	table->matcher_selector = other_selector;
+	rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+	return 0;
+error:
+	if (segment)
+		mlx5_destroy_multi_pattern_segment(segment);
+	if (matcher) {
+		ret = mlx5dr_matcher_destroy(matcher);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy new matcher");
+	}
+	return ret;
+}
+
+static int
+flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev,
+			      struct rte_flow_template_table *table,
+			      struct rte_flow_error *error)
+{
+	int ret;
+	uint32_t selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector];
+
+	if (!rte_flow_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (!matcher_info->matcher || matcher_info->refcnt)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot complete table resize");
+	ret = mlx5dr_matcher_destroy(matcher_info->matcher);
+	if (ret)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy retired matcher");
+	matcher_info->matcher = NULL;
+	return 0;
+}
+
+static int
+flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+		       const struct rte_flow_op_attr *attr,
+		       struct rte_flow *flow, void *user_data,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
+	struct rte_flow_template_table *table = hw_flow->table;
+	uint32_t table_selector = table->matcher_selector;
+	uint32_t rule_selector = hw_flow->matcher_selector;
+	uint32_t other_selector;
+	struct mlx5dr_matcher *other_matcher;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.burst = attr->postpone,
+	};
+
+	/**
+	 * mlx5dr_matcher_resize_rule_move() accepts original table matcher -
+	 * the one that was used BEFORE table resize.
+	 * Since the function is called AFTER table resize,
+	 * `table->matcher_selector` always points to the new matcher and
+	 * `hw_flow->matcher_selector` points to a matcher used to create the flow.
+	 */
+	other_selector = rule_selector == table_selector ?
+			 (rule_selector + 1) & 1 : rule_selector;
+	other_matcher = table->matcher_info[other_selector].matcher;
+	if (!other_matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "no active table resize");
+	job = flow_hw_job_get(priv, queue);
+	if (!job)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "queue is full");
+	job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE;
+	job->user_data = user_data;
+	job->flow = hw_flow;
+	rule_attr.user_data = job;
+	if (rule_selector == table_selector) {
+		struct rte_ring *ring = !attr->postpone ?
+					priv->hw_q[queue].flow_transfer_completed :
+					priv->hw_q[queue].flow_transfer_pending;
+		rte_ring_enqueue(ring, job);
+		return 0;
+	}
+	ret = mlx5dr_matcher_resize_rule_move(other_matcher,
+					      (struct mlx5dr_rule *)hw_flow->rule,
+					      &rule_attr);
+	if (ret) {
+		flow_hw_job_put(priv, job, queue);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "flow transfer failed");
+	}
+	return 0;
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -11517,11 +11874,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.table_resize = flow_hw_table_resize,
 	.group_set_miss_actions = flow_hw_group_set_miss_actions,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_update = flow_hw_async_flow_update,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
+	.flow_update_resized = flow_hw_update_resized,
+	.table_resize_complete = flow_hw_table_resize_complete,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,
 	.async_action_create = flow_hw_action_handle_create,
diff --git a/drivers/net/mlx5/mlx5_host.c b/drivers/net/mlx5/mlx5_host.c
new file mode 100644
index 0000000000..4f3356d6e6
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_host.c
@@ -0,0 +1,211 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2022 NVIDIA Corporation & Affiliates
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdlib.h>
+
+#include <rte_flow.h>
+#include <rte_pmd_mlx5.h>
+#include <mlx5_malloc.h>
+
+#include "mlx5_flow.h"
+#include "mlx5.h"
+
+#include "hws/host/mlx5dr_host.h"
+
+struct rte_pmd_mlx5_dr_action_cache {
+	enum rte_flow_action_type type;
+	void *release_data;
+	struct mlx5dr_dev_action *dr_dev_action;
+	LIST_ENTRY(rte_pmd_mlx5_dr_action_cache) next;
+};
+
+struct rte_pmd_mlx5_dev_process {
+	struct mlx5dr_dev_process *dr_dev_process;
+	struct mlx5dr_dev_context *dr_dev_ctx;
+	uint16_t port_id;
+	LIST_HEAD(action_head, rte_pmd_mlx5_dr_action_cache) head;
+};
+
+struct rte_pmd_mlx5_dev_process *
+rte_pmd_mlx5_host_process_open(uint16_t port_id,
+			       struct rte_pmd_mlx5_host_device_info *info)
+{
+	struct rte_pmd_mlx5_dev_process *dev_process;
+	struct mlx5dr_dev_context_attr dr_attr = {0};
+	struct mlx5dr_dev_process *dr_dev_process;
+	const struct mlx5_priv *priv;
+
+	dev_process = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
+				  sizeof(struct rte_pmd_mlx5_dev_process),
+				  MLX5_MALLOC_ALIGNMENT,
+				  SOCKET_ID_ANY);
+	if (!dev_process) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	if (info->type == RTE_PMD_MLX5_DEVICE_TYPE_DPA)
+		dr_dev_process = mlx5dr_host_process_open(info->dpa.process, info->dpa.outbox);
+	else
+		dr_dev_process = mlx5dr_host_process_open(NULL, NULL);
+
+	if (!dr_dev_process)
+		goto free_dev_process;
+
+	dev_process->port_id = port_id;
+	dev_process->dr_dev_process = dr_dev_process;
+
+	priv = rte_eth_devices[port_id].data->dev_private;
+	dr_attr.queue_size = info->queue_size;
+	dr_attr.queues = info->queues;
+
+	dev_process->dr_dev_ctx =  mlx5dr_host_context_bind(dr_dev_process,
+							    priv->dr_ctx,
+							    &dr_attr);
+	if (!dev_process->dr_dev_ctx)
+		goto close_process;
+
+	return (struct rte_pmd_mlx5_dev_process *)dev_process;
+
+close_process:
+	mlx5dr_host_process_close(dr_dev_process);
+free_dev_process:
+	mlx5_free(dev_process);
+	return NULL;
+}
+
+int
+rte_pmd_mlx5_host_process_close(struct rte_pmd_mlx5_dev_process *dev_process)
+{
+	struct mlx5dr_dev_process *dr_dev_process = dev_process->dr_dev_process;
+
+	mlx5dr_host_context_unbind(dr_dev_process, dev_process->dr_dev_ctx);
+	mlx5dr_host_process_close(dr_dev_process);
+	mlx5_free(dev_process);
+	return 0;
+}
+
+struct rte_pmd_mlx5_dev_ctx *
+rte_pmd_mlx5_host_get_dev_ctx(struct rte_pmd_mlx5_dev_process *dev_process)
+{
+	return (struct rte_pmd_mlx5_dev_ctx *)dev_process->dr_dev_ctx;
+}
+
+struct rte_pmd_mlx5_dev_table *
+rte_pmd_mlx5_host_table_bind(struct rte_pmd_mlx5_dev_process *dev_process,
+			     struct rte_flow_template_table *table)
+{
+	struct mlx5dr_dev_process *dr_dev_process;
+	struct mlx5dr_dev_matcher *dr_dev_matcher;
+	struct mlx5dr_matcher *matcher;
+
+	if (rte_flow_table_resizable(dev_process->port_id, &table->cfg.attr)) {
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	dr_dev_process = dev_process->dr_dev_process;
+	matcher = table->matcher_info[0].matcher;
+
+	dr_dev_matcher = mlx5dr_host_matcher_bind(dr_dev_process, matcher);
+
+	return (struct rte_pmd_mlx5_dev_table *)dr_dev_matcher;
+}
+
+int
+rte_pmd_mlx5_host_table_unbind(struct rte_pmd_mlx5_dev_process *dev_process,
+			       struct rte_pmd_mlx5_dev_table *dev_table)
+{
+	struct mlx5dr_dev_process *dr_dev_process;
+	struct mlx5dr_dev_matcher *dr_dev_matcher;
+
+	dr_dev_process = dev_process->dr_dev_process;
+	dr_dev_matcher = (struct mlx5dr_dev_matcher *)dev_table;
+
+	return mlx5dr_host_matcher_unbind(dr_dev_process, dr_dev_matcher);
+}
+
+struct rte_pmd_mlx5_dev_action *
+rte_pmd_mlx5_host_action_bind(struct rte_pmd_mlx5_dev_process *dev_process,
+			      struct rte_pmd_mlx5_host_action *action)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[dev_process->port_id];
+	struct rte_pmd_mlx5_dr_action_cache *action_cache;
+	struct mlx5dr_dev_process *dr_dev_process;
+	struct mlx5dr_dev_action *dr_dev_action;
+	struct mlx5dr_action *dr_action;
+	void *release_data;
+
+	dr_dev_process = dev_process->dr_dev_process;
+
+	action_cache = mlx5_malloc(MLX5_MEM_SYS | MLX5_MEM_ZERO,
+				   sizeof(*action_cache),
+				   MLX5_MALLOC_ALIGNMENT,
+				   SOCKET_ID_ANY);
+	if (!action_cache) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+
+	dr_action = mlx5_flow_hw_get_dr_action(dev, action, &release_data);
+	if (!dr_action) {
+		DRV_LOG(ERR, "Failed to get dr action type %d", action->type);
+		goto free_rte_host_action;
+	}
+
+	dr_dev_action = mlx5dr_host_action_bind(dr_dev_process, dr_action);
+	if (!dr_dev_action) {
+		DRV_LOG(ERR, "Failed to bind dr_action");
+		goto put_dr_action;
+	}
+
+	action_cache->type = action->type;
+	action_cache->release_data = release_data;
+	action_cache->dr_dev_action = dr_dev_action;
+	LIST_INSERT_HEAD(&dev_process->head, action_cache, next);
+
+	return (struct rte_pmd_mlx5_dev_action *)dr_dev_action;
+
+put_dr_action:
+	mlx5_flow_hw_put_dr_action(dev, action->type, release_data);
+free_rte_host_action:
+	mlx5_free(action_cache);
+	return NULL;
+}
+
+int
+rte_pmd_mlx5_host_action_unbind(struct rte_pmd_mlx5_dev_process *dev_process,
+				struct rte_pmd_mlx5_dev_action *dev_action)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[dev_process->port_id];
+	struct rte_pmd_mlx5_dr_action_cache *action_cache;
+	struct mlx5dr_dev_process *dr_dev_process;
+	struct mlx5dr_dev_action *dr_dev_action;
+
+	dr_dev_process = dev_process->dr_dev_process;
+	dr_dev_action = (struct mlx5dr_dev_action *)dev_action;
+
+	LIST_FOREACH(action_cache, &dev_process->head, next) {
+		if (action_cache->dr_dev_action == dr_dev_action) {
+			LIST_REMOVE(action_cache, next);
+			mlx5dr_host_action_unbind(dr_dev_process, dr_dev_action);
+			mlx5_flow_hw_put_dr_action(dev,
+						   action_cache->type,
+						   action_cache->release_data);
+			mlx5_free(action_cache);
+			return 0;
+		}
+	}
+
+	DRV_LOG(ERR, "Failed to find dr action to unbind");
+	rte_errno = EINVAL;
+	return rte_errno;
+}
+
+size_t rte_pmd_mlx5_host_get_dev_rule_handle_size(void)
+{
+	return mlx5dr_host_rule_get_dev_rule_handle_size();
+}
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 0/4] net/mlx5: add support for flow table resizing
  2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
@ 2024-02-28 10:25   ` Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 1/4] net/mlx5: add resize function to ipool Gregory Etelson
                       ` (3 more replies)
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
  1 sibling, 4 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 10:25 UTC (permalink / raw)
  To: dev; +Cc: getelson, mkashani, rasland, Dariusz Sosnowski

Support template table resize API.

Gregory Etelson (3):
  net/mlx5: fix parameters verification in HWS table create
  net/mlx5: move multi-pattern actions management to table level
  net/mlx5: add support for flow table resizing

Maayan Kashani (1):
  net/mlx5: add resize function to ipool

 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 +++
 drivers/net/mlx5/mlx5_flow.h    | 101 ++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 750 +++++++++++++++++++++++---------
 drivers/net/mlx5/mlx5_utils.c   |  29 ++
 drivers/net/mlx5/mlx5_utils.h   |  16 +
 6 files changed, 752 insertions(+), 200 deletions(-)

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

--
v2: Update PMD after DPDK API changes.
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 1/4] net/mlx5: add resize function to ipool
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
@ 2024-02-28 10:25     ` Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 10:25 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

From: Maayan Kashani <mkashani@nvidia.com>

Before this patch, the ipool size could be fixed by setting max_idx in
mlx5_indexed_pool_config upon ipool creation, or the pool could be
auto-resized up to the maximum limit by setting max_idx to zero upon
ipool creation, in which case the saved value is the maximum possible
index.
This patch adds an ipool_resize API that updates the value of max_idx
when it is not set to the maximum, i.e. when the pool is not in
auto-resize mode. It enables the allocation of new trunks with
malloc/zmalloc up to the max_idx limit. Note that the number of entries
added by a resize must be divisible by trunk_size.
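
For illustration, a minimal usage sketch follows; the pool handle and
the trunk size of 4096 are assumptions, not part of this patch:

    /* "flow_pool" is an existing mlx5_indexed_pool created with a fixed
     * max_idx (auto-resize disabled).
     */
    static int
    grow_flow_pool(struct mlx5_indexed_pool *flow_pool)
    {
        const uint32_t trunk_size = 4096;
        int ret;

        /* The increment must be a multiple of the pool trunk size. */
        ret = mlx5_ipool_resize(flow_pool, 2 * trunk_size);
        if (ret != 0)
            /* Increment not trunk-aligned or maximum index exceeded. */
            return ret;
        return 0;
    }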

Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 4db738785f..e28db2ec43 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
 	return NULL;
 }
 
+int
+mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries)
+{
+	uint32_t cur_max_idx;
+	uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+
+	if (num_entries % pool->cfg.trunk_size) {
+		DRV_LOG(ERR, "num_entries param should be a multiple of trunk_size(=%u)\n",
+			pool->cfg.trunk_size);
+		return -EINVAL;
+	}
+
+	mlx5_ipool_lock(pool);
+	cur_max_idx = pool->cfg.max_idx + num_entries;
+	/* If the ipool max idx is above maximum or uint overflow occurred. */
+	if (cur_max_idx > max_index || cur_max_idx < num_entries) {
+		DRV_LOG(ERR, "Ipool resize failed\n");
+		DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n",
+		num_entries, cur_max_idx, max_index);
+		mlx5_ipool_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* Update maximum entries number. */
+	pool->cfg.max_idx = cur_max_idx;
+	mlx5_ipool_unlock(pool);
+	return 0;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 82e8298781..f3c0d76a6d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
  */
 void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
 
+/**
+ * This function resizes the ipool.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param num_entries
+ *   Number of entries to be added to the pool.
+ *   This number should be divisible by trunk_size.
+ *
+ * @return
+ *   - non-zero value on error.
+ *   - 0 on success.
+ *
+ */
+int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries);
+
 /**
  * This function allocates new empty Three-level table.
  *
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 1/4] net/mlx5: add resize function to ipool Gregory Etelson
@ 2024-02-28 10:25     ` Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
  3 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 10:25 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Rongwei Liu

Modified the conditionals in `flow_hw_table_create()` to use bitwise
AND instead of equality checks when assessing the
`table_cfg->attr->specialize` bitmask.
This allows for greater flexibility, as the bitmask may encapsulate
multiple flags.
The patch maintains the previous behavior for single-flag values while
adding support for multiple flags.
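
For illustration only, the difference between the old and the new
check; use_wire_optimization() is a hypothetical helper and the flag
names are those used in the hunk below:

    uint32_t specialize = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG;

    /* Old check: taken only when WIRE_ORIG is the sole bit set. */
    if (specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
        use_wire_optimization();  /* hypothetical helper */

    /* New check: taken whenever the WIRE_ORIG bit is present, even if
     * other specialization bits are set; setting both WIRE_ORIG and
     * VPORT_ORIG together is rejected before this point.
     */
    if (specialize & RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
        use_wire_optimization();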

Fixes: 592d5367b5e4 ("net/mlx5: enable hint in async flow table")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 783ad9e72a..5938d8b90c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4390,12 +4390,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template. */
 	for (i = 0; i < nb_item_templates; i++) {
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 1/4] net/mlx5: add resize function to ipool Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
@ 2024-02-28 10:25     ` Gregory Etelson
  2024-02-28 10:25     ` [PATCH v2 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
  3 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 10:25 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

The structures and management code related to multi-pattern actions
have been moved to the table level.
This refactoring is required for the upcoming table resize feature.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  73 ++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 225 +++++++++++++++-----------------
 2 files changed, 173 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..9cc237c542 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1410,7 +1410,6 @@ struct mlx5_hw_encap_decap_action {
 	/* Is header_reformat action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1433,7 +1432,6 @@ struct mlx5_hw_modify_header_action {
 	/* Is MODIFY_HEADER action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
@@ -1487,6 +1485,76 @@ struct mlx5_flow_group {
 #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
 
+#define MLX5_MULTIPATTERN_ENCAP_NUM 5
+#define MLX5_MAX_TABLE_RESIZE_NUM 64
+
+struct mlx5_multi_pattern_segment {
+	uint32_t capacity;
+	uint32_t head_index;
+	struct mlx5dr_action *mhdr_action;
+	struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM];
+};
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		/**
+		 * insert_header structure is larger than reformat_header.
+		 * Enclosing these structures in a union would cause a gap between
+		 * reformat_hdr array elements.
+		 * mlx5dr_action_create_reformat() expects adjacent array elements.
+		 */
+		struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+	struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM];
+};
+
+static __rte_always_inline void
+mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	mpctx->segments[0].head_index = 1;
+}
+
+static __rte_always_inline bool
+mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	return mpctx->segments[0].head_index == 1;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		if (!mpctx->segments[i].capacity)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
+				uint32_t flow_resource_ix)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
@@ -1507,6 +1575,7 @@ struct rte_flow_template_table {
 	uint8_t nb_item_templates; /* Item template number. */
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
+	struct mlx5_tbl_multi_pattern_ctx mpctx;
 };
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5938d8b90c..38aed03970 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -78,41 +78,14 @@ struct mlx5_indlst_legacy {
 #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
 (((const struct encap_type *)(ptr))->definition)
 
-struct mlx5_multi_pattern_ctx {
-	union {
-		struct mlx5dr_action_reformat_header reformat_hdr;
-		struct mlx5dr_action_mh_pattern mh_pattern;
-	};
-	union {
-		/* action template auxiliary structures for object destruction */
-		struct mlx5_hw_encap_decap_action *encap;
-		struct mlx5_hw_modify_header_action *mhdr;
-	};
-	/* multi pattern action */
-	struct mlx5dr_rule_action *rule_action;
-};
-
-#define MLX5_MULTIPATTERN_ENCAP_NUM 4
-
-struct mlx5_tbl_multi_pattern_ctx {
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
-
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} mh;
-};
-
-#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
-
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error);
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
@@ -577,28 +550,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 static void
 flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
 {
-	if (encap_decap->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
-	}
-	if (encap_decap->action)
+	if (encap_decap->action && !encap_decap->multi_pattern)
 		mlx5dr_action_destroy(encap_decap->action);
 }
 
 static void
 flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
 {
-	if (mhdr->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
-	}
-	if (mhdr->action)
+	if (mhdr->action && !mhdr->multi_pattern)
 		mlx5dr_action_destroy(mhdr->action);
 }
 
@@ -1924,21 +1883,22 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 		acts->encap_decap->shared = true;
 	} else {
 		uint32_t ix;
-		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
-							    mp_reformat_ix;
+		typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat +
+							mp_reformat_ix;
 
-		ix = reformat_ctx->elements_num++;
-		reformat_ctx->ctx[ix].reformat_hdr = hdr;
-		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
-		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		ix = reformat->elements_num++;
+		reformat->reformat_hdr[ix] = hdr;
 		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
 		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->multi_pattern = 1;
 		acts->encap_decap->data_size = data_size;
+		acts->encap_decap->action_type = refmt_type;
 		ret = __flow_hw_act_data_encap_append
 			(priv, acts, (at->actions + reformat_src)->type,
 			 reformat_src, at->reformat_off, data_size);
 		if (ret)
 			return -rte_errno;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -1987,12 +1947,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	} else {
 		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
 		uint32_t idx = mh->elements_num;
-		struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
 
-		mh_ctx->mh_pattern = pattern;
-		mh_ctx->mhdr = acts->mhdr;
-		mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+		mh->pattern[mh->elements_num++] = pattern;
+		acts->mhdr->multi_pattern = 1;
 		acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -2552,16 +2511,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < tbl->nb_action_templates; i++) {
 		if (__flow_hw_actions_translate(dev, &tbl->cfg,
 						&tbl->ats[i].acts,
 						tbl->ats[i].action_template,
-						&mpat, error))
+						&tbl->mpctx, error))
 			goto err;
 	}
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0],
+					     rte_log2_u32(tbl->cfg.attr.nb_flows),
+					     error);
 	if (ret)
 		goto err;
 	return 0;
@@ -2944,6 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_multi_pattern_segment *mp_segment;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -3074,6 +3035,10 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+			if (!mp_segment || !mp_segment->mhdr_action)
+				return -1;
+			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3225,9 +3190,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				     age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
-		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
-				job->flow->res_idx - 1;
-		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
+		int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type);
+		struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos];
+
+		if (ix < 0)
+			return -1;
+		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->reformat_action[ix])
+			return -1;
+		ra->action = mp_segment->reformat_action[ix];
+		ra->reformat.offset = job->flow->res_idx - 1;
+		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
@@ -4133,86 +4106,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error)
 {
+	int ret = 0;
 	uint32_t i;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx;
 	const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
 	uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
-	struct mlx5dr_action *dr_action;
-	uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+	struct mlx5dr_action *dr_action = NULL;
 
 	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
-		uint32_t j;
-		uint32_t *reformat_refcnt;
-		typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
-		struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i;
 		enum mlx5dr_action_type reformat_type =
 			mlx5_multi_pattern_reformat_index_to_type(i);
 
 		if (!reformat->elements_num)
 			continue;
-		for (j = 0; j < reformat->elements_num; j++)
-			hdr[j] = reformat->ctx[j].reformat_hdr;
-		reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
-					      rte_socket_id());
-		if (!reformat_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate multi-pattern encap counter");
-		*reformat_refcnt = reformat->elements_num;
-		dr_action = mlx5dr_action_create_reformat
-			(priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
-			 bulk_size, flags);
+		dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ?
+			mlx5dr_action_create_insert_header
+			(priv->dr_ctx, reformat->elements_num,
+			 reformat->insert_hdr, bulk_size, flags) :
+			mlx5dr_action_create_reformat
+			(priv->dr_ctx, reformat_type, reformat->elements_num,
+			 reformat->reformat_hdr, bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(reformat_refcnt);
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern encap action");
-		}
-		for (j = 0; j < reformat->elements_num; j++) {
-			reformat->ctx[j].rule_action->action = dr_action;
-			reformat->ctx[j].encap->action = dr_action;
-			reformat->ctx[j].encap->multi_pattern = 1;
-			reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 NULL,
+						 "failed to create multi-pattern encap action");
+			goto error;
 		}
+		segment->reformat_action[i] = dr_action;
 	}
-	if (mpat->mh.elements_num) {
-		typeof(mpat->mh) *mh = &mpat->mh;
-		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-		uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
-						 0, rte_socket_id());
-
-		if (!mh_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate modify header counter");
-		*mh_refcnt = mpat->mh.elements_num;
-		for (i = 0; i < mpat->mh.elements_num; i++)
-			pattern[i] = mh->ctx[i].mh_pattern;
+	if (mpctx->mh.elements_num) {
+		typeof(mpctx->mh) *mh = &mpctx->mh;
 		dr_action = mlx5dr_action_create_modify_header
-			(priv->dr_ctx, mpat->mh.elements_num, pattern,
+			(priv->dr_ctx, mpctx->mh.elements_num, mh->pattern,
 			 bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(mh_refcnt);
-			return rte_flow_error_set(error, rte_errno,
+			ret = rte_flow_error_set(error, rte_errno,
 						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern header modify action");
-		}
-		for (i = 0; i < mpat->mh.elements_num; i++) {
-			mh->ctx[i].rule_action->action = dr_action;
-			mh->ctx[i].mhdr->action = dr_action;
-			mh->ctx[i].mhdr->multi_pattern = 1;
-			mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+						  NULL, "failed to create multi-pattern header modify action");
+			goto error;
 		}
+		segment->mhdr_action = dr_action;
+	}
+	if (dr_action) {
+		segment->capacity = RTE_BIT32(bulk_size);
+		if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1])
+			segment[1].head_index = segment->head_index + segment->capacity;
 	}
-
 	return 0;
+error:
+	mlx5_destroy_multi_pattern_segment(segment);
+	return ret;
 }
 
 static int
@@ -4225,7 +4177,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint8_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < nb_action_templates; i++) {
 		uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
@@ -4246,16 +4197,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 		ret = __flow_hw_actions_translate(dev, &tbl->cfg,
 						  &tbl->ats[i].acts,
 						  action_templates[i],
-						  &mpat, error);
+						  &tbl->mpctx, error);
 		if (ret) {
 			i++;
 			goto at_error;
 		}
 	}
 	tbl->nb_action_templates = nb_action_templates;
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
-	if (ret)
-		goto at_error;
+	if (mlx5_is_multi_pattern_active(&tbl->mpctx)) {
+		ret = mlx5_tbl_multi_pattern_process(dev, tbl,
+						     &tbl->mpctx.segments[0],
+						     rte_log2_u32(tbl->cfg.attr.nb_flows),
+						     error);
+		if (ret)
+			goto at_error;
+	}
 	return 0;
 
 at_error:
@@ -4624,6 +4580,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 				    action_templates, nb_action_templates, error);
 }
 
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment)
+{
+	int i;
+
+	if (segment->mhdr_action)
+		mlx5dr_action_destroy(segment->mhdr_action);
+	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+		if (segment->reformat_action[i])
+			mlx5dr_action_destroy(segment->reformat_action[i]);
+	}
+	segment->capacity = 0;
+}
+
+static void
+flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table)
+{
+	int sx;
+
+	for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++)
+		mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx);
+}
 /**
  * Destroy flow table.
  *
@@ -4669,6 +4647,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
+	flow_hw_destroy_table_multi_pattern_ctx(table);
 	mlx5dr_matcher_destroy(table->matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 4/4] net/mlx5: add support for flow table resizing
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
                       ` (2 preceding siblings ...)
  2024-02-28 10:25     ` [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
@ 2024-02-28 10:25     ` Gregory Etelson
  3 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 10:25 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

Support the template table resize API in the PMD.
The patch allows increasing the capacity of an existing table.

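A rough usage sketch from the application side, assuming the ethdev
template table resize API from the dependent series; the port, queue,
table and flow handles (port_id, table, old_flows, new_nb_flows) are
hypothetical placeholders:

    struct rte_flow_error err;
    struct rte_flow_op_attr op_attr = { .postpone = 1 };
    uint32_t i, queue = 0;

    /* 1. Allocate the larger matcher; new flows are created on it. */
    if (rte_flow_template_table_resize(port_id, table, new_nb_flows, &err))
        return -1;
    /* 2. Move every pre-existing flow to the new matcher (async). */
    for (i = 0; i < nb_old_flows; i++)
        rte_flow_async_update_resized(port_id, queue, &op_attr,
                                      old_flows[i], NULL, &err);
    rte_flow_push(port_id, queue, &err);
    /* ... drain completions with rte_flow_pull() ... */
    /* 3. Release the retired matcher once all flows were moved. */
    if (rte_flow_template_table_resize_complete(port_id, table, &err))
        return -1;
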
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 ++++
 drivers/net/mlx5/mlx5_flow.h    |  84 ++++--
 drivers/net/mlx5/mlx5_flow_hw.c | 518 +++++++++++++++++++++++++++-----
 4 files changed, 553 insertions(+), 105 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 99850a58af..bb1853e797 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -380,6 +380,9 @@ enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
 	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 	MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized flow destroy job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize. */
 };
 
 enum mlx5_hw_indirect_type {
@@ -422,6 +425,8 @@ struct mlx5_hw_q {
 	struct mlx5_hw_q_job **job; /* LIFO header. */
 	struct rte_ring *indir_cq; /* Indirect action SW completion queue. */
 	struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */
+	struct rte_ring *flow_transfer_pending;
+	struct rte_ring *flow_transfer_completed;
 } __rte_cache_aligned;
 
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3e179110a0..477b13e04d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1095,6 +1095,20 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev,
 			  uint8_t *hash,
 			  struct rte_flow_error *error);
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error);
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error);
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1133,6 +1147,9 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 		mlx5_flow_action_list_handle_query_update,
 	.flow_calc_table_hash = mlx5_flow_calc_table_hash,
 	.flow_calc_encap_hash = mlx5_flow_calc_encap_hash,
+	.flow_template_table_resize = mlx5_template_table_resize,
+	.flow_update_resized = mlx5_flow_async_update_resized,
+	.flow_template_table_resize_complete = mlx5_table_resize_complete,
 };
 
 /* Tunnel information. */
@@ -10548,6 +10565,40 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev,
 	return fops->flow_calc_encap_hash(dev, pattern, dest_field, hash, error);
 }
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP);
+	return fops->table_resize(dev, table, nb_rules, error);
+}
+
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP);
+	return fops->table_resize_complete(dev, table, error);
+}
+
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *op_attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP);
+	return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9cc237c542..6c2944c21a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1217,6 +1217,7 @@ struct rte_flow {
 	uint32_t tunnel:1;
 	uint32_t meter:24; /**< Holds flow meter id. */
 	uint32_t indirect_type:2; /**< Indirect action type. */
+	uint32_t matcher_selector:1; /**< Matcher index in resizable table. */
 	uint32_t rix_mreg_copy;
 	/**< Index to metadata register copy table resource. */
 	uint32_t counter; /**< Holds flow counter. */
@@ -1262,6 +1263,7 @@ struct rte_flow_hw {
 	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	uint8_t mt_idx;
+	uint8_t matcher_selector:1;
 	uint32_t age_idx;
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
@@ -1489,6 +1491,11 @@ struct mlx5_flow_group {
 #define MLX5_MAX_TABLE_RESIZE_NUM 64
 
 struct mlx5_multi_pattern_segment {
+	/*
+	 * Number of Modify Header Argument objects allocated for the
+	 * action in this segment.
+	 * Capacity is always a power of 2.
+	 */
 	uint32_t capacity;
 	uint32_t head_index;
 	struct mlx5dr_action *mhdr_action;
@@ -1527,43 +1534,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
 	return mpctx->segments[0].head_index == 1;
 }
 
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		if (!mpctx->segments[i].capacity)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
-				uint32_t flow_resource_ix)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		uint32_t limit = mpctx->segments[i].head_index +
-				 mpctx->segments[i].capacity;
-
-		if (flow_resource_ix < limit)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
 };
 
+struct mlx5_matcher_info {
+	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	uint32_t refcnt;
+};
+
 struct rte_flow_template_table {
 	LIST_ENTRY(rte_flow_template_table) next;
 	struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */
-	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	struct mlx5_matcher_info matcher_info[2];
+	uint32_t matcher_selector;
+	rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */
 	/* Item templates bind to the table. */
 	struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
 	/* Action templates bind to the table. */
@@ -1576,8 +1562,34 @@ struct rte_flow_template_table {
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
 	struct mlx5_tbl_multi_pattern_ctx mpctx;
+	struct mlx5dr_matcher_attr matcher_attr;
 };
 
+static __rte_always_inline struct mlx5dr_matcher *
+mlx5_table_matcher(const struct rte_flow_template_table *table)
+{
+	return table->matcher_info[table->matcher_selector].matcher;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table,
+				uint32_t flow_resource_ix)
+{
+	int i;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx;
+
+	if (likely(!rte_flow_template_table_resizable(0, &table->cfg.attr)))
+		return &mpctx->segments[0];
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 #endif
 
 /*
@@ -2274,6 +2286,17 @@ typedef int
 			 enum rte_flow_encap_hash_field dest_field,
 			 uint8_t *hash,
 			 struct rte_flow_error *error);
+typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev,
+				   struct rte_flow_template_table *table,
+				   uint32_t nb_rules, struct rte_flow_error *error);
+typedef int (*mlx5_flow_update_resized_t)
+			(struct rte_eth_dev *dev, uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow *rule, void *user_data,
+			 struct rte_flow_error *error);
+typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev,
+				       struct rte_flow_template_table *table,
+				       struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -2348,6 +2371,9 @@ struct mlx5_flow_driver_ops {
 		async_action_list_handle_query_update;
 	mlx5_flow_calc_table_hash_t flow_calc_table_hash;
 	mlx5_flow_calc_encap_hash_t flow_calc_encap_hash;
+	mlx5_table_resize_t table_resize;
+	mlx5_flow_update_resized_t flow_update_resized;
+	table_resize_complete_t table_resize_complete;
 };
 
 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 38aed03970..1bd29999f9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2904,7 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
-	struct mlx5_multi_pattern_segment *mp_segment;
+	struct mlx5_multi_pattern_segment *mp_segment = NULL;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -2918,17 +2918,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
-	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) {
+	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) {
 		uint16_t pos = hw_acts->mhdr->pos;
 
-		if (!hw_acts->mhdr->shared) {
-			rule_acts[pos].modify_header.offset =
-						job->flow->res_idx - 1;
-			rule_acts[pos].modify_header.data =
-						(uint8_t *)job->mhdr_cmd;
-			rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
-				   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
-		}
+		mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->mhdr_action)
+			return -1;
+		rule_acts[pos].action = mp_segment->mhdr_action;
+		/* offset is relative to DR action */
+		rule_acts[pos].modify_header.offset =
+					job->flow->res_idx - mp_segment->head_index;
+		rule_acts[pos].modify_header.data =
+					(uint8_t *)job->mhdr_cmd;
+		rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
+			   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
 	}
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
@@ -3035,10 +3038,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
-			mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
-			if (!mp_segment || !mp_segment->mhdr_action)
-				return -1;
-			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3195,11 +3194,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 
 		if (ix < 0)
 			return -1;
-		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment)
+			mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
 		if (!mp_segment || !mp_segment->reformat_action[ix])
 			return -1;
 		ra->action = mp_segment->reformat_action[ix];
-		ra->reformat.offset = job->flow->res_idx - 1;
+		/* reformat offset is relative to selected DR action */
+		ra->reformat.offset = job->flow->res_idx - mp_segment->head_index;
 		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
@@ -3371,10 +3372,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					    pattern_template_index, job);
 	if (!rule_items)
 		goto error;
-	ret = mlx5dr_rule_create(table->matcher,
-				 pattern_template_index, rule_items,
-				 action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+		flow->matcher_selector = selector;
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3491,9 +3508,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto error;
 	}
-	ret = mlx5dr_rule_create(table->matcher,
-				 0, items, action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3673,7 +3704,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 					  "fail to destroy rte flow: flow queue full");
-	job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
+	job->type = !rte_flow_template_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ?
+		    MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY;
 	job->user_data = user_data;
 	job->flow = fh;
 	rule_attr.user_data = job;
@@ -3785,6 +3817,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 	}
 }
 
+static __rte_always_inline int
+mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev,
+				uint32_t queue, struct rte_flow_op_result res[],
+				uint16_t n_res)
+{
+	uint32_t size, i;
+	struct mlx5_hw_q_job *job = NULL;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed;
+
+	size = RTE_MIN(rte_ring_count(ring), n_res);
+	for (i = 0; i < size; i++) {
+		res[i].status = RTE_FLOW_OP_SUCCESS;
+		rte_ring_dequeue(ring, (void **)&job);
+		res[i].user_data = job->user_data;
+		flow_hw_job_put(priv, job, queue);
+	}
+	return (int)size;
+}
+
 static inline int
 __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 				 uint32_t queue,
@@ -3833,6 +3885,79 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	return ret_comp;
 }
 
+static __rte_always_inline void
+hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
+			       struct mlx5_hw_q_job *job,
+			       uint32_t queue, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	/* Release the original resource index in case of update. */
+	uint32_t res_idx = flow->res_idx;
+
+	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
+		flow_hw_jump_release(dev, flow->jump);
+	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
+		mlx5_hrxq_obj_release(dev, flow->hrxq);
+	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
+		flow_hw_age_count_release(priv, queue,
+					  flow, error);
+	if (flow->mtr_id) {
+		mlx5_ipool_free(pool->idx_pool,	flow->mtr_id);
+		flow->mtr_id = 0;
+	}
+	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
+		if (table) {
+			mlx5_ipool_free(table->resource, res_idx);
+			mlx5_ipool_free(table->flow, flow->idx);
+		}
+	} else {
+		rte_memcpy(flow, job->upd_flow,
+			   offsetof(struct rte_flow_hw, rule));
+		mlx5_ipool_free(table->resource, res_idx);
+	}
+}
+
+static __rte_always_inline void
+hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
+		      struct mlx5_hw_q_job *job,
+		      uint32_t queue, enum rte_flow_op_status status,
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	uint32_t selector = flow->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	uint32_t __rte_unused refcnt;
+
+	switch (job->type) {
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		__atomic_add_fetch(&table->matcher_info[selector].refcnt,
+				   1, __ATOMIC_RELAXED);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+		refcnt = __atomic_sub_fetch(&table->matcher_info[selector].refcnt, 1,
+					    __ATOMIC_RELAXED);
+		MLX5_ASSERT((int)refcnt >= 0);
+		hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		if (status == RTE_FLOW_OP_SUCCESS) {
+			refcnt = __atomic_sub_fetch(&table->matcher_info[selector].refcnt,
+						    1, __ATOMIC_RELAXED);
+			MLX5_ASSERT((int)refcnt >= 0);
+			__atomic_add_fetch(&table->matcher_info[other_selector].refcnt,
+					   1, __ATOMIC_RELAXED);
+			flow->matcher_selector = other_selector;
+		}
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * Pull the enqueued flows.
  *
@@ -3861,9 +3986,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
-	uint32_t res_idx;
 	int ret, i;
 
 	/* 1. Pull the flow completion. */
@@ -3874,31 +3997,20 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				"fail to query flow queue");
 	for (i = 0; i <  ret; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
-		/* Release the original resource index in case of update. */
-		res_idx = job->flow->res_idx;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY ||
-		    job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) {
-			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
-				flow_hw_jump_release(dev, job->flow->jump);
-			else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-				mlx5_hrxq_obj_release(dev, job->flow->hrxq);
-			if (mlx5_hws_cnt_id_valid(job->flow->cnt_id))
-				flow_hw_age_count_release(priv, queue,
-							  job->flow, error);
-			if (job->flow->mtr_id) {
-				mlx5_ipool_free(pool->idx_pool,	job->flow->mtr_id);
-				job->flow->mtr_id = 0;
-			}
-			if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-				mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
-			} else {
-				rte_memcpy(job->flow, job->upd_flow,
-					offsetof(struct rte_flow_hw, rule));
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-			}
+		switch (job->type) {
+		case MLX5_HW_Q_JOB_TYPE_DESTROY:
+		case MLX5_HW_Q_JOB_TYPE_UPDATE:
+			hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+			break;
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+			hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error);
+			break;
+		default:
+			break;
 		}
 		flow_hw_job_put(priv, job, queue);
 	}
@@ -3906,24 +4018,36 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	if (ret < n_res)
 		ret += __flow_hw_pull_indir_action_comp(dev, queue, &res[ret],
 							n_res - ret);
+	if (ret < n_res)
+		ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret],
+						       n_res - ret);
+
 	return ret;
 }
 
+static uint32_t
+mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q)
+{
+	void *job = NULL;
+	uint32_t i, size = rte_ring_count(pending_q);
+
+	for (i = 0; i < size; i++) {
+		rte_ring_dequeue(pending_q, &job);
+		rte_ring_enqueue(cmpl_q, job);
+	}
+	return size;
+}
+
 static inline uint32_t
 __flow_hw_push_action(struct rte_eth_dev *dev,
 		    uint32_t queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_ring *iq = priv->hw_q[queue].indir_iq;
-	struct rte_ring *cq = priv->hw_q[queue].indir_cq;
-	void *job = NULL;
-	uint32_t ret, i;
+	struct mlx5_hw_q *hw_q = &priv->hw_q[queue];
 
-	ret = rte_ring_count(iq);
-	for (i = 0; i < ret; i++) {
-		rte_ring_dequeue(iq, &job);
-		rte_ring_enqueue(cq, job);
-	}
+	mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq);
+	mlx5_hw_push_queue(hw_q->flow_transfer_pending,
+			   hw_q->flow_transfer_completed);
 	if (!priv->shared_host) {
 		if (priv->hws_ctpool)
 			mlx5_aso_push_wqe(priv->sh,
@@ -4332,6 +4456,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	grp = container_of(ge, struct mlx5_flow_group, entry);
 	tbl->grp = grp;
 	/* Prepare matcher information. */
+	matcher_attr.resizable = !!rte_flow_template_table_resizable(dev->data->port_id, &table_cfg->attr);
 	matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY;
 	matcher_attr.priority = attr->flow_attr.priority;
 	matcher_attr.optimize_using_rule_idx = true;
@@ -4350,7 +4475,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
 
 		if ((attr->specialize & val) == val) {
-			DRV_LOG(INFO, "Invalid hint value %x",
+			DRV_LOG(ERR, "Invalid hint value %x",
 				attr->specialize);
 			rte_errno = EINVAL;
 			goto it_error;
@@ -4394,10 +4519,11 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		i = nb_item_templates;
 		goto it_error;
 	}
-	tbl->matcher = mlx5dr_matcher_create
+	tbl->matcher_info[0].matcher = mlx5dr_matcher_create
 		(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
-	if (!tbl->matcher)
+	if (!tbl->matcher_info[0].matcher)
 		goto at_error;
+	tbl->matcher_attr = matcher_attr;
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		    MLX5DR_TABLE_TYPE_NIC_RX);
@@ -4405,6 +4531,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
+	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
@@ -4576,6 +4703,11 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 
 	if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
 		return NULL;
+	if (!cfg.attr.flow_attr.group && rte_flow_template_table_resizable(dev->data->port_id, attr)) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "table cannot be resized: invalid group");
+		return NULL;
+	}
 	return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
 				    action_templates, nb_action_templates, error);
 }
@@ -4648,7 +4780,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 				   1, __ATOMIC_RELAXED);
 	}
 	flow_hw_destroy_table_multi_pattern_ctx(table);
-	mlx5dr_matcher_destroy(table->matcher);
+	if (table->matcher_info[0].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[0].matcher);
+	if (table->matcher_info[1].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
@@ -9642,6 +9777,16 @@ action_template_drop_init(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static __rte_always_inline struct rte_ring *
+mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue);
+	return rte_ring_create(mz_name, size, SOCKET_ID_ANY,
+			       RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -9769,7 +9914,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		char mz_name[RTE_MEMZONE_NAMESIZE];
 		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
@@ -9803,22 +9947,23 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_cq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		/* Notice ring name length is limited. */
+		priv->hw_q[i].indir_cq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq");
 		if (!priv->hw_q[i].indir_cq)
 			goto err;
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_iq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		priv->hw_q[i].indir_iq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq");
 		if (!priv->hw_q[i].indir_iq)
 			goto err;
+		priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_pending");
+		if (!priv->hw_q[i].flow_transfer_pending)
+			goto err;
+		priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_done");
+		if (!priv->hw_q[i].flow_transfer_completed)
+			goto err;
 	}
 	dr_ctx_attr.pd = priv->sh->cdev->pd;
 	dr_ctx_attr.queues = nb_q_updated;
@@ -10039,6 +10184,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	for (i = 0; i < nb_q_updated; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -10139,6 +10286,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->nb_queue; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -11969,7 +12118,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 	items = flow_hw_get_rule_items(dev, table, pattern,
 				       pattern_template_index,
 				       &job);
-	res = mlx5dr_rule_hash_calculate(table->matcher, items,
+	res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items,
 					 pattern_template_index,
 					 MLX5DR_RULE_HASH_CALC_MODE_RAW,
 					 hash);
@@ -12046,6 +12195,220 @@ flow_hw_calc_encap_hash(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev,
+					   struct rte_flow_template_table *table,
+					   uint32_t nb_flows,
+					   struct rte_flow_error *error)
+{
+	struct mlx5_multi_pattern_segment *segment = table->mpctx.segments;
+	uint32_t bulk_size;
+	int i, ret;
+
+	/**
+	 * A segment always allocates Modify Header Argument objects in
+	 * powers of 2.
+	 * On resize, the PMD adds only the minimal required number of
+	 * argument objects.
+	 * For example, if the table size was 10, 16 argument objects were
+	 * allocated; resizing to 15 will not add new objects.
+	 */
+	for (i = 1;
+	     i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity;
+	     i++, segment++);
+	if (i == MLX5_MAX_TABLE_RESIZE_NUM)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	if (segment->head_index - 1 >= nb_flows)
+		return 0;
+	bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1);
+	ret = mlx5_tbl_multi_pattern_process(dev, table, segment,
+					     rte_log2_u32(bulk_size),
+					     error);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	return i;
+}
+
+static int
+flow_hw_table_resize(struct rte_eth_dev *dev,
+		     struct rte_flow_template_table *table,
+		     uint32_t nb_flows,
+		     struct rte_flow_error *error)
+{
+	struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr;
+	struct mlx5_multi_pattern_segment *segment = NULL;
+	struct mlx5dr_matcher *matcher = NULL;
+	uint32_t i, selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	int ret;
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (table->matcher_info[other_selector].matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "last table resize was not completed");
+	if (nb_flows <= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "shrinking table is not supported");
+	ret = mlx5_ipool_resize(table->flow, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize flows pool");
+	ret = mlx5_ipool_resize(table->resource, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize resources pool");
+	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
+		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
+		if (ret < 0)
+			return ret;
+		if (ret > 0)
+			segment = table->mpctx.segments + ret;
+	}
+	for (i = 0; i < table->nb_item_templates; i++)
+		mt[i] = table->its[i]->mt;
+	for (i = 0; i < table->nb_action_templates; i++)
+		at[i] = table->ats[i].action_template->tmpl;
+	nb_flows = rte_align32pow2(nb_flows);
+	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
+	matcher = mlx5dr_matcher_create(table->grp->tbl, mt,
+					table->nb_item_templates, at,
+					table->nb_action_templates,
+					&matcher_attr);
+	if (!matcher) {
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to create new matcher");
+		goto error;
+	}
+	rte_rwlock_write_lock(&table->matcher_replace_rwlk);
+	ret = mlx5dr_matcher_resize_set_target
+			(table->matcher_info[selector].matcher, matcher);
+	if (ret) {
+		rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to initiate matcher swap");
+		goto error;
+	}
+	table->cfg.attr.nb_flows = nb_flows;
+	table->matcher_info[other_selector].matcher = matcher;
+	table->matcher_info[other_selector].refcnt = 0;
+	table->matcher_selector = other_selector;
+	rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+	return 0;
+error:
+	if (segment)
+		mlx5_destroy_multi_pattern_segment(segment);
+	if (matcher) {
+		ret = mlx5dr_matcher_destroy(matcher);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy new matcher");
+	}
+	return ret;
+}
+
+static int
+flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev,
+			      struct rte_flow_template_table *table,
+			      struct rte_flow_error *error)
+{
+	int ret;
+	uint32_t selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector];
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (!matcher_info->matcher || matcher_info->refcnt)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot complete table resize");
+	ret = mlx5dr_matcher_destroy(matcher_info->matcher);
+	if (ret)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy retired matcher");
+	matcher_info->matcher = NULL;
+	return 0;
+}
+
+static int
+flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+		       const struct rte_flow_op_attr *attr,
+		       struct rte_flow *flow, void *user_data,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
+	struct rte_flow_template_table *table = hw_flow->table;
+	uint32_t table_selector = table->matcher_selector;
+	uint32_t rule_selector = hw_flow->matcher_selector;
+	uint32_t other_selector;
+	struct mlx5dr_matcher *other_matcher;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.burst = attr->postpone,
+	};
+
+	/**
+	 * mlx5dr_matcher_resize_rule_move() expects the original table
+	 * matcher, i.e. the one that was used BEFORE the table resize.
+	 * Since this function is called AFTER the table resize,
+	 * `table->matcher_selector` always points to the new matcher and
+	 * `hw_flow->matcher_selector` points to the matcher used to create the flow.
+	 */
+	other_selector = rule_selector == table_selector ?
+			 (rule_selector + 1) & 1 : rule_selector;
+	other_matcher = table->matcher_info[other_selector].matcher;
+	if (!other_matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "no active table resize");
+	job = flow_hw_job_get(priv, queue);
+	if (!job)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "queue is full");
+	job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE;
+	job->user_data = user_data;
+	job->flow = hw_flow;
+	rule_attr.user_data = job;
+	if (rule_selector == table_selector) {
+		struct rte_ring *ring = !attr->postpone ?
+					priv->hw_q[queue].flow_transfer_completed :
+					priv->hw_q[queue].flow_transfer_pending;
+		rte_ring_enqueue(ring, job);
+		return 0;
+	}
+	ret = mlx5dr_matcher_resize_rule_move(other_matcher,
+					      (struct mlx5dr_rule *)hw_flow->rule,
+					      &rule_attr);
+	if (ret) {
+		flow_hw_job_put(priv, job, queue);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "flow transfer failed");
+	}
+	return 0;
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -12057,11 +12420,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.table_resize = flow_hw_table_resize,
 	.group_set_miss_actions = flow_hw_group_set_miss_actions,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_update = flow_hw_async_flow_update,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
+	.flow_update_resized = flow_hw_update_resized,
+	.table_resize_complete = flow_hw_table_resize_complete,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,
 	.async_action_create = flow_hw_action_handle_create,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 0/4] net/mlx5: add support for flow table resizing
  2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
  2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
@ 2024-02-28 13:33   ` Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 1/4] net/mlx5: add resize function to ipool Gregory Etelson
                       ` (4 more replies)
  1 sibling, 5 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 13:33 UTC (permalink / raw)
  To: dev; +Cc: getelson, mkashani, rasland, Dariusz Sosnowski

Support template table resize API.

Gregory Etelson (3):
  net/mlx5: fix parameters verification in HWS table create
  net/mlx5: move multi-pattern actions management to table level
  net/mlx5: add support for flow table resizing

Maayan Kashani (1):
  net/mlx5: add resize function to ipool

 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 +++
 drivers/net/mlx5/mlx5_flow.h    | 101 ++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 761 +++++++++++++++++++++++---------
 drivers/net/mlx5/mlx5_utils.c   |  29 ++
 drivers/net/mlx5/mlx5_utils.h   |  16 +
 6 files changed, 763 insertions(+), 200 deletions(-)

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

--
v2: Update PMD after DPDK API changes.
v3: Use RTE atomic API.
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 1/4] net/mlx5: add resize function to ipool
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
@ 2024-02-28 13:33     ` Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 13:33 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

From: Maayan Kashani <mkashani@nvidia.com>

Before this patch, the ipool size could either be fixed by setting
max_idx in mlx5_indexed_pool_config upon ipool creation, or
auto-resized up to the maximum limit by setting max_idx to zero upon
ipool creation, in which case the saved value is the maximum possible
index.
This patch adds an ipool_resize API that updates max_idx when the pool
is not in auto-resize mode (i.e. max_idx is not set to the maximum).
It enables allocating new trunks with malloc/zmalloc up to the max_idx
limit. Note that the number of entries added by a resize must be
divisible by trunk_size.

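A minimal usage sketch (the pool handle and the growth amount are
hypothetical; only the mlx5_ipool_resize() prototype added below is
assumed):

    /* Pool created with a fixed cfg.max_idx, i.e. not in auto-resize mode. */
    uint32_t grow_by = 2 * pool->cfg.trunk_size; /* must be a trunk_size multiple */

    if (mlx5_ipool_resize(pool, grow_by))
        DRV_LOG(ERR, "cannot grow ipool");
    /* On success, allocations may use indexes up to the new max_idx. */
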
Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_utils.c | 29 +++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_utils.h | 16 ++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c
index 4db738785f..e28db2ec43 100644
--- a/drivers/net/mlx5/mlx5_utils.c
+++ b/drivers/net/mlx5/mlx5_utils.c
@@ -809,6 +809,35 @@ mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos)
 	return NULL;
 }
 
+int
+mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries)
+{
+	uint32_t cur_max_idx;
+	uint32_t max_index = mlx5_trunk_idx_offset_get(pool, TRUNK_MAX_IDX + 1);
+
+	if (num_entries % pool->cfg.trunk_size) {
+		DRV_LOG(ERR, "num_entries param should be a multiple of trunk_size(=%u)\n",
+			pool->cfg.trunk_size);
+		return -EINVAL;
+	}
+
+	mlx5_ipool_lock(pool);
+	cur_max_idx = pool->cfg.max_idx + num_entries;
+	/* If the ipool max idx is above maximum or uint overflow occurred. */
+	if (cur_max_idx > max_index || cur_max_idx < num_entries) {
+		DRV_LOG(ERR, "Ipool resize failed\n");
+		DRV_LOG(ERR, "Adding %u entries to existing %u entries, will cross max limit(=%u)\n",
+			num_entries, cur_max_idx, max_index);
+		mlx5_ipool_unlock(pool);
+		return -EINVAL;
+	}
+
+	/* Update maximum entries number. */
+	pool->cfg.max_idx = cur_max_idx;
+	mlx5_ipool_unlock(pool);
+	return 0;
+}
+
 void
 mlx5_ipool_dump(struct mlx5_indexed_pool *pool)
 {
diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h
index 82e8298781..f3c0d76a6d 100644
--- a/drivers/net/mlx5/mlx5_utils.h
+++ b/drivers/net/mlx5/mlx5_utils.h
@@ -427,6 +427,22 @@ void mlx5_ipool_flush_cache(struct mlx5_indexed_pool *pool);
  */
 void *mlx5_ipool_get_next(struct mlx5_indexed_pool *pool, uint32_t *pos);
 
+/**
+ * This function resizes the ipool.
+ *
+ * @param pool
+ *   Pointer to the index memory pool handler.
+ * @param num_entries
+ *   Number of entries to be added to the pool.
+ *   This number should be divisible by trunk_size.
+ *
+ * @return
+ *   - non-zero value on error.
+ *   - 0 on success.
+ *
+ */
+int mlx5_ipool_resize(struct mlx5_indexed_pool *pool, uint32_t num_entries);
+
 /**
  * This function allocates new empty Three-level table.
  *
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 1/4] net/mlx5: add resize function to ipool Gregory Etelson
@ 2024-02-28 13:33     ` Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 13:33 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, stable, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Rongwei Liu

Modified the conditionals in `flow_hw_table_create()` to use bitwise
AND instead of equality checks when assessing
the `table_cfg->attr->specialize` bitmask.
This will allow for greater flexibility as the bitmask may encapsulate
multiple flags.
The patch maintains the previous behavior with single flag values,
while providing support for multiple flags.

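For illustration, a sketch of the check logic after the fix (the helper
name specialize_to_flow_src is hypothetical; the flag and enum names are
the existing ones used in the hunk below):

    static int
    specialize_to_flow_src(uint32_t specialize)
    {
        const uint32_t both = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
                              RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;

        if ((specialize & both) == both)
            return -EINVAL; /* contradictory hints */
        if (specialize & RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
            return MLX5DR_MATCHER_FLOW_SRC_WIRE;
        if (specialize & RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
            return MLX5DR_MATCHER_FLOW_SRC_VPORT;
        return MLX5DR_MATCHER_FLOW_SRC_ANY;
    }
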
Fixes: 240b77cfcba5 ("net/mlx5: enable hint in async flow table")

Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 23 +++++++++++++++++------
 1 file changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 783ad9e72a..5938d8b90c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4390,12 +4390,23 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
 	/* Parse hints information. */
 	if (attr->specialize) {
-		if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_WIRE;
-		else if (attr->specialize == RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
-			matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_VPORT;
-		else
-			DRV_LOG(INFO, "Unsupported hint value %x", attr->specialize);
+		uint32_t val = RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG |
+			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
+
+		if ((attr->specialize & val) == val) {
+			DRV_LOG(INFO, "Invalid hint value %x",
+				attr->specialize);
+			rte_errno = EINVAL;
+			goto it_error;
+		}
+		if (attr->specialize &
+		    RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_WIRE_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_WIRE;
+		else if (attr->specialize &
+			 RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG)
+			matcher_attr.optimize_flow_src =
+				MLX5DR_MATCHER_FLOW_SRC_VPORT;
 	}
 	/* Build the item template. */
 	for (i = 0; i < nb_item_templates; i++) {
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 1/4] net/mlx5: add resize function to ipool Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
@ 2024-02-28 13:33     ` Gregory Etelson
  2024-02-28 13:33     ` [PATCH v3 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
  2024-02-28 15:50     ` [PATCH v3 0/4] " Raslan Darawsheh
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 13:33 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

The structures and management code related to multi-pattern actions
have been moved to the table level.
This refactoring is required for the upcoming table resize feature.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    |  73 ++++++++++-
 drivers/net/mlx5/mlx5_flow_hw.c | 226 +++++++++++++++-----------------
 2 files changed, 174 insertions(+), 125 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..9cc237c542 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1410,7 +1410,6 @@ struct mlx5_hw_encap_decap_action {
 	/* Is header_reformat action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	size_t data_size; /* Action metadata size. */
 	uint8_t data[]; /* Action data. */
 };
@@ -1433,7 +1432,6 @@ struct mlx5_hw_modify_header_action {
 	/* Is MODIFY_HEADER action shared across flows in table. */
 	uint32_t shared:1;
 	uint32_t multi_pattern:1;
-	volatile uint32_t *multi_pattern_refcnt;
 	/* Amount of modification commands stored in the precompiled buffer. */
 	uint32_t mhdr_cmds_num;
 	/* Precompiled modification commands. */
@@ -1487,6 +1485,76 @@ struct mlx5_flow_group {
 #define MLX5_HW_TBL_MAX_ITEM_TEMPLATE 2
 #define MLX5_HW_TBL_MAX_ACTION_TEMPLATE 32
 
+#define MLX5_MULTIPATTERN_ENCAP_NUM 5
+#define MLX5_MAX_TABLE_RESIZE_NUM 64
+
+struct mlx5_multi_pattern_segment {
+	uint32_t capacity;
+	uint32_t head_index;
+	struct mlx5dr_action *mhdr_action;
+	struct mlx5dr_action *reformat_action[MLX5_MULTIPATTERN_ENCAP_NUM];
+};
+
+struct mlx5_tbl_multi_pattern_ctx {
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_reformat_header reformat_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		/**
+		 * insert_header structure is larger than reformat_header.
+		 * Enclosing these structures in a union would cause a gap between
+		 * reformat_hdr array elements.
+		 * mlx5dr_action_create_reformat() expects adjacent array elements.
+		 */
+		struct mlx5dr_action_insert_header insert_hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
+
+	struct {
+		uint32_t elements_num;
+		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	} mh;
+	struct mlx5_multi_pattern_segment segments[MLX5_MAX_TABLE_RESIZE_NUM];
+};
+
+static __rte_always_inline void
+mlx5_multi_pattern_activate(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	mpctx->segments[0].head_index = 1;
+}
+
+static __rte_always_inline bool
+mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	return mpctx->segments[0].head_index == 1;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		if (!mpctx->segments[i].capacity)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
+				uint32_t flow_resource_ix)
+{
+	int i;
+
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
@@ -1507,6 +1575,7 @@ struct rte_flow_template_table {
 	uint8_t nb_item_templates; /* Item template number. */
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
+	struct mlx5_tbl_multi_pattern_ctx mpctx;
 };
 
 #endif
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5938d8b90c..05442f0bd3 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -78,41 +78,14 @@ struct mlx5_indlst_legacy {
 #define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
 (((const struct encap_type *)(ptr))->definition)
 
-struct mlx5_multi_pattern_ctx {
-	union {
-		struct mlx5dr_action_reformat_header reformat_hdr;
-		struct mlx5dr_action_mh_pattern mh_pattern;
-	};
-	union {
-		/* action template auxiliary structures for object destruction */
-		struct mlx5_hw_encap_decap_action *encap;
-		struct mlx5_hw_modify_header_action *mhdr;
-	};
-	/* multi pattern action */
-	struct mlx5dr_rule_action *rule_action;
-};
-
-#define MLX5_MULTIPATTERN_ENCAP_NUM 4
-
-struct mlx5_tbl_multi_pattern_ctx {
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} reformat[MLX5_MULTIPATTERN_ENCAP_NUM];
-
-	struct {
-		uint32_t elements_num;
-		struct mlx5_multi_pattern_ctx ctx[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-	} mh;
-};
-
-#define MLX5_EMPTY_MULTI_PATTERN_CTX {{{0,}},}
-
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error);
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment);
 
 static __rte_always_inline int
 mlx5_multi_pattern_reformat_to_index(enum mlx5dr_action_type type)
@@ -577,28 +550,14 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 static void
 flow_hw_template_destroy_reformat_action(struct mlx5_hw_encap_decap_action *encap_decap)
 {
-	if (encap_decap->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(encap_decap->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)encap_decap->multi_pattern_refcnt);
-	}
-	if (encap_decap->action)
+	if (encap_decap->action && !encap_decap->multi_pattern)
 		mlx5dr_action_destroy(encap_decap->action);
 }
 
 static void
 flow_hw_template_destroy_mhdr_action(struct mlx5_hw_modify_header_action *mhdr)
 {
-	if (mhdr->multi_pattern) {
-		uint32_t refcnt = __atomic_sub_fetch(mhdr->multi_pattern_refcnt,
-						     1, __ATOMIC_RELAXED);
-		if (refcnt)
-			return;
-		mlx5_free((void *)(uintptr_t)mhdr->multi_pattern_refcnt);
-	}
-	if (mhdr->action)
+	if (mhdr->action && !mhdr->multi_pattern)
 		mlx5dr_action_destroy(mhdr->action);
 }
 
@@ -1924,21 +1883,22 @@ mlx5_tbl_translate_reformat(struct mlx5_priv *priv,
 		acts->encap_decap->shared = true;
 	} else {
 		uint32_t ix;
-		typeof(mp_ctx->reformat[0]) *reformat_ctx = mp_ctx->reformat +
-							    mp_reformat_ix;
+		typeof(mp_ctx->reformat[0]) *reformat = mp_ctx->reformat +
+							mp_reformat_ix;
 
-		ix = reformat_ctx->elements_num++;
-		reformat_ctx->ctx[ix].reformat_hdr = hdr;
-		reformat_ctx->ctx[ix].rule_action = &acts->rule_acts[at->reformat_off];
-		reformat_ctx->ctx[ix].encap = acts->encap_decap;
+		ix = reformat->elements_num++;
+		reformat->reformat_hdr[ix] = hdr;
 		acts->rule_acts[at->reformat_off].reformat.hdr_idx = ix;
 		acts->encap_decap_pos = at->reformat_off;
+		acts->encap_decap->multi_pattern = 1;
 		acts->encap_decap->data_size = data_size;
+		acts->encap_decap->action_type = refmt_type;
 		ret = __flow_hw_act_data_encap_append
 			(priv, acts, (at->actions + reformat_src)->type,
 			 reformat_src, at->reformat_off, data_size);
 		if (ret)
 			return -rte_errno;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -1987,12 +1947,11 @@ mlx5_tbl_translate_modify_header(struct rte_eth_dev *dev,
 	} else {
 		typeof(mp_ctx->mh) *mh = &mp_ctx->mh;
 		uint32_t idx = mh->elements_num;
-		struct mlx5_multi_pattern_ctx *mh_ctx = mh->ctx + mh->elements_num++;
 
-		mh_ctx->mh_pattern = pattern;
-		mh_ctx->mhdr = acts->mhdr;
-		mh_ctx->rule_action = &acts->rule_acts[mhdr_ix];
+		mh->pattern[mh->elements_num++] = pattern;
+		acts->mhdr->multi_pattern = 1;
 		acts->rule_acts[mhdr_ix].modify_header.pattern_idx = idx;
+		mlx5_multi_pattern_activate(mp_ctx);
 	}
 	return 0;
 }
@@ -2552,16 +2511,17 @@ flow_hw_actions_translate(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < tbl->nb_action_templates; i++) {
 		if (__flow_hw_actions_translate(dev, &tbl->cfg,
 						&tbl->ats[i].acts,
 						tbl->ats[i].action_template,
-						&mpat, error))
+						&tbl->mpctx, error))
 			goto err;
 	}
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
+	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &tbl->mpctx.segments[0],
+					     rte_log2_u32(tbl->cfg.attr.nb_flows),
+					     error);
 	if (ret)
 		goto err;
 	return 0;
@@ -2944,6 +2904,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
+	struct mlx5_multi_pattern_segment *mp_segment;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -3074,6 +3035,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
+			mp_segment = mlx5_multi_pattern_segment_find
+					(&table->mpctx, job->flow->res_idx);
+			if (!mp_segment || !mp_segment->mhdr_action)
+				return -1;
+			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3225,9 +3191,17 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 				     age_idx);
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
-		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
-				job->flow->res_idx - 1;
-		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
+		int ix = mlx5_multi_pattern_reformat_to_index(hw_acts->encap_decap->action_type);
+		struct mlx5dr_rule_action *ra = &rule_acts[hw_acts->encap_decap_pos];
+
+		if (ix < 0)
+			return -1;
+		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->reformat_action[ix])
+			return -1;
+		ra->action = mp_segment->reformat_action[ix];
+		ra->reformat.offset = job->flow->res_idx - 1;
+		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
 		rule_acts[hw_acts->push_remove_pos].ipv6_ext.offset =
@@ -4133,86 +4107,65 @@ flow_hw_q_flow_flush(struct rte_eth_dev *dev,
 static int
 mlx5_tbl_multi_pattern_process(struct rte_eth_dev *dev,
 			       struct rte_flow_template_table *tbl,
-			       struct mlx5_tbl_multi_pattern_ctx *mpat,
+			       struct mlx5_multi_pattern_segment *segment,
+			       uint32_t bulk_size,
 			       struct rte_flow_error *error)
 {
+	int ret = 0;
 	uint32_t i;
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &tbl->mpctx;
 	const struct rte_flow_template_table_attr *table_attr = &tbl->cfg.attr;
 	const struct rte_flow_attr *attr = &table_attr->flow_attr;
 	enum mlx5dr_table_type type = get_mlx5dr_table_type(attr);
 	uint32_t flags = mlx5_hw_act_flag[!!attr->group][type];
-	struct mlx5dr_action *dr_action;
-	uint32_t bulk_size = rte_log2_u32(table_attr->nb_flows);
+	struct mlx5dr_action *dr_action = NULL;
 
 	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
-		uint32_t j;
-		uint32_t *reformat_refcnt;
-		typeof(mpat->reformat[0]) *reformat = mpat->reformat + i;
-		struct mlx5dr_action_reformat_header hdr[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+		typeof(mpctx->reformat[0]) *reformat = mpctx->reformat + i;
 		enum mlx5dr_action_type reformat_type =
 			mlx5_multi_pattern_reformat_index_to_type(i);
 
 		if (!reformat->elements_num)
 			continue;
-		for (j = 0; j < reformat->elements_num; j++)
-			hdr[j] = reformat->ctx[j].reformat_hdr;
-		reformat_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t), 0,
-					      rte_socket_id());
-		if (!reformat_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate multi-pattern encap counter");
-		*reformat_refcnt = reformat->elements_num;
-		dr_action = mlx5dr_action_create_reformat
-			(priv->dr_ctx, reformat_type, reformat->elements_num, hdr,
-			 bulk_size, flags);
+		dr_action = reformat_type == MLX5DR_ACTION_TYP_INSERT_HEADER ?
+			mlx5dr_action_create_insert_header
+			(priv->dr_ctx, reformat->elements_num,
+			 reformat->insert_hdr, bulk_size, flags) :
+			mlx5dr_action_create_reformat
+			(priv->dr_ctx, reformat_type, reformat->elements_num,
+			 reformat->reformat_hdr, bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(reformat_refcnt);
-			return rte_flow_error_set(error, rte_errno,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern encap action");
-		}
-		for (j = 0; j < reformat->elements_num; j++) {
-			reformat->ctx[j].rule_action->action = dr_action;
-			reformat->ctx[j].encap->action = dr_action;
-			reformat->ctx[j].encap->multi_pattern = 1;
-			reformat->ctx[j].encap->multi_pattern_refcnt = reformat_refcnt;
+			ret = rte_flow_error_set(error, rte_errno,
+						 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						 NULL,
+						 "failed to create multi-pattern encap action");
+			goto error;
 		}
+		segment->reformat_action[i] = dr_action;
 	}
-	if (mpat->mh.elements_num) {
-		typeof(mpat->mh) *mh = &mpat->mh;
-		struct mlx5dr_action_mh_pattern pattern[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
-		uint32_t *mh_refcnt = mlx5_malloc(MLX5_MEM_ZERO, sizeof(uint32_t),
-						 0, rte_socket_id());
-
-		if (!mh_refcnt)
-			return rte_flow_error_set(error, ENOMEM,
-						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL, "failed to allocate modify header counter");
-		*mh_refcnt = mpat->mh.elements_num;
-		for (i = 0; i < mpat->mh.elements_num; i++)
-			pattern[i] = mh->ctx[i].mh_pattern;
+	if (mpctx->mh.elements_num) {
+		typeof(mpctx->mh) *mh = &mpctx->mh;
 		dr_action = mlx5dr_action_create_modify_header
-			(priv->dr_ctx, mpat->mh.elements_num, pattern,
+			(priv->dr_ctx, mpctx->mh.elements_num, mh->pattern,
 			 bulk_size, flags);
 		if (!dr_action) {
-			mlx5_free(mh_refcnt);
-			return rte_flow_error_set(error, rte_errno,
+			ret = rte_flow_error_set(error, rte_errno,
 						  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-						  NULL,
-						  "failed to create multi-pattern header modify action");
-		}
-		for (i = 0; i < mpat->mh.elements_num; i++) {
-			mh->ctx[i].rule_action->action = dr_action;
-			mh->ctx[i].mhdr->action = dr_action;
-			mh->ctx[i].mhdr->multi_pattern = 1;
-			mh->ctx[i].mhdr->multi_pattern_refcnt = mh_refcnt;
+						  NULL, "failed to create multi-pattern header modify action");
+			goto error;
 		}
+		segment->mhdr_action = dr_action;
+	}
+	if (dr_action) {
+		segment->capacity = RTE_BIT32(bulk_size);
+		if (segment != &mpctx->segments[MLX5_MAX_TABLE_RESIZE_NUM - 1])
+			segment[1].head_index = segment->head_index + segment->capacity;
 	}
-
 	return 0;
+error:
+	mlx5_destroy_multi_pattern_segment(segment);
+	return ret;
 }
 
 static int
@@ -4225,7 +4178,6 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint8_t i;
-	struct mlx5_tbl_multi_pattern_ctx mpat = MLX5_EMPTY_MULTI_PATTERN_CTX;
 
 	for (i = 0; i < nb_action_templates; i++) {
 		uint32_t refcnt = __atomic_add_fetch(&action_templates[i]->refcnt, 1,
@@ -4246,16 +4198,21 @@ mlx5_hw_build_template_table(struct rte_eth_dev *dev,
 		ret = __flow_hw_actions_translate(dev, &tbl->cfg,
 						  &tbl->ats[i].acts,
 						  action_templates[i],
-						  &mpat, error);
+						  &tbl->mpctx, error);
 		if (ret) {
 			i++;
 			goto at_error;
 		}
 	}
 	tbl->nb_action_templates = nb_action_templates;
-	ret = mlx5_tbl_multi_pattern_process(dev, tbl, &mpat, error);
-	if (ret)
-		goto at_error;
+	if (mlx5_is_multi_pattern_active(&tbl->mpctx)) {
+		ret = mlx5_tbl_multi_pattern_process(dev, tbl,
+						     &tbl->mpctx.segments[0],
+						     rte_log2_u32(tbl->cfg.attr.nb_flows),
+						     error);
+		if (ret)
+			goto at_error;
+	}
 	return 0;
 
 at_error:
@@ -4624,6 +4581,28 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 				    action_templates, nb_action_templates, error);
 }
 
+static void
+mlx5_destroy_multi_pattern_segment(struct mlx5_multi_pattern_segment *segment)
+{
+	int i;
+
+	if (segment->mhdr_action)
+		mlx5dr_action_destroy(segment->mhdr_action);
+	for (i = 0; i < MLX5_MULTIPATTERN_ENCAP_NUM; i++) {
+		if (segment->reformat_action[i])
+			mlx5dr_action_destroy(segment->reformat_action[i]);
+	}
+	segment->capacity = 0;
+}
+
+static void
+flow_hw_destroy_table_multi_pattern_ctx(struct rte_flow_template_table *table)
+{
+	int sx;
+
+	for (sx = 0; sx < MLX5_MAX_TABLE_RESIZE_NUM; sx++)
+		mlx5_destroy_multi_pattern_segment(table->mpctx.segments + sx);
+}
 /**
  * Destroy flow table.
  *
@@ -4669,6 +4648,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&table->ats[i].action_template->refcnt,
 				   1, __ATOMIC_RELAXED);
 	}
+	flow_hw_destroy_table_multi_pattern_ctx(table);
 	mlx5dr_matcher_destroy(table->matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v3 4/4] net/mlx5: add support for flow table resizing
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
                       ` (2 preceding siblings ...)
  2024-02-28 13:33     ` [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
@ 2024-02-28 13:33     ` Gregory Etelson
  2024-02-28 15:50     ` [PATCH v3 0/4] " Raslan Darawsheh
  4 siblings, 0 replies; 17+ messages in thread
From: Gregory Etelson @ 2024-02-28 13:33 UTC (permalink / raw)
  To: dev
  Cc: getelson, mkashani, rasland, Dariusz Sosnowski,
	Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad

Support the template table resize API in the PMD.
The patch allows increasing the capacity of an existing template table.
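
The expected call sequence on the application side is sketched below.
The ethdev entry points come from the dependent template table resize
API series; their names and signatures are assumed here, so treat this
as an outline rather than verified code:

  #include <stdint.h>
  #include <rte_flow.h>

  /*
   * Grow a resizable template table and migrate its existing flows.
   * The ethdev calls are assumed from the dependent resize API series;
   * completion polling and detailed error handling are trimmed.
   */
  static int
  grow_table(uint16_t port_id, uint32_t queue,
             struct rte_flow_template_table *tbl,
             struct rte_flow **flows, uint32_t nb_flows,
             uint32_t new_capacity)
  {
          struct rte_flow_op_attr op_attr = { .postpone = 0 };
          struct rte_flow_error error;
          uint32_t i;

          /* 1. Create the larger matcher; existing rules keep matching. */
          if (rte_flow_template_table_resize(port_id, tbl, new_capacity, &error))
                  return -1;
          /* 2. Move every pre-resize flow to the new matcher. */
          for (i = 0; i < nb_flows; i++)
                  if (rte_flow_async_update_resized(port_id, queue, &op_attr,
                                                    flows[i], NULL, &error))
                          return -1;
          /* ... drain completions with rte_flow_push()/rte_flow_pull() ... */
          /* 3. Destroy the retired matcher once all flows have moved. */
          return rte_flow_template_table_resize_complete(port_id, tbl, &error);
  }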

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   5 +
 drivers/net/mlx5/mlx5_flow.c    |  51 +++
 drivers/net/mlx5/mlx5_flow.h    |  84 +++--
 drivers/net/mlx5/mlx5_flow_hw.c | 530 +++++++++++++++++++++++++++-----
 4 files changed, 564 insertions(+), 106 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 99850a58af..bb1853e797 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -380,6 +380,9 @@ enum mlx5_hw_job_type {
 	MLX5_HW_Q_JOB_TYPE_UPDATE, /* Flow update job type. */
 	MLX5_HW_Q_JOB_TYPE_QUERY, /* Flow query job type. */
 	MLX5_HW_Q_JOB_TYPE_UPDATE_QUERY, /* Flow update and query job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE, /* Non-optimized flow create job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY, /* Non-optimized flow destroy job type. */
+	MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE, /* Move flow after table resize. */
 };
 
 enum mlx5_hw_indirect_type {
@@ -422,6 +425,8 @@ struct mlx5_hw_q {
 	struct mlx5_hw_q_job **job; /* LIFO header. */
 	struct rte_ring *indir_cq; /* Indirect action SW completion queue. */
 	struct rte_ring *indir_iq; /* Indirect action SW in progress queue. */
+	struct rte_ring *flow_transfer_pending;
+	struct rte_ring *flow_transfer_completed;
 } __rte_cache_aligned;
 
 
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3e179110a0..477b13e04d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1095,6 +1095,20 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev,
 			  uint8_t *hash,
 			  struct rte_flow_error *error);
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error);
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error);
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1133,6 +1147,9 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 		mlx5_flow_action_list_handle_query_update,
 	.flow_calc_table_hash = mlx5_flow_calc_table_hash,
 	.flow_calc_encap_hash = mlx5_flow_calc_encap_hash,
+	.flow_template_table_resize = mlx5_template_table_resize,
+	.flow_update_resized = mlx5_flow_async_update_resized,
+	.flow_template_table_resize_complete = mlx5_table_resize_complete,
 };
 
 /* Tunnel information. */
@@ -10548,6 +10565,40 @@ mlx5_flow_calc_encap_hash(struct rte_eth_dev *dev,
 	return fops->flow_calc_encap_hash(dev, pattern, dest_field, hash, error);
 }
 
+static int
+mlx5_template_table_resize(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   uint32_t nb_rules, struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize, ENOTSUP);
+	return fops->table_resize(dev, table, nb_rules, error);
+}
+
+static int
+mlx5_table_resize_complete(struct rte_eth_dev *dev,
+			   struct rte_flow_template_table *table,
+			   struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, table_resize_complete, ENOTSUP);
+	return fops->table_resize_complete(dev, table, error);
+}
+
+static int
+mlx5_flow_async_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+			       const struct rte_flow_op_attr *op_attr,
+			       struct rte_flow *rule, void *user_data,
+			       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, flow_update_resized, ENOTSUP);
+	return fops->flow_update_resized(dev, queue, op_attr, rule, user_data, error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 9cc237c542..6c2944c21a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1217,6 +1217,7 @@ struct rte_flow {
 	uint32_t tunnel:1;
 	uint32_t meter:24; /**< Holds flow meter id. */
 	uint32_t indirect_type:2; /**< Indirect action type. */
+	uint32_t matcher_selector:1; /**< Matcher index in resizable table. */
 	uint32_t rix_mreg_copy;
 	/**< Index to metadata register copy table resource. */
 	uint32_t counter; /**< Holds flow counter. */
@@ -1262,6 +1263,7 @@ struct rte_flow_hw {
 	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
 	uint8_t mt_idx;
+	uint8_t matcher_selector:1;
 	uint32_t age_idx;
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
@@ -1489,6 +1491,11 @@ struct mlx5_flow_group {
 #define MLX5_MAX_TABLE_RESIZE_NUM 64
 
 struct mlx5_multi_pattern_segment {
+	/*
+	 * Number of Modify Header Argument Objects allocated for the action
+	 * in this segment.
+	 * Capacity is always a power of 2.
+	 */
 	uint32_t capacity;
 	uint32_t head_index;
 	struct mlx5dr_action *mhdr_action;
@@ -1527,43 +1534,22 @@ mlx5_is_multi_pattern_active(const struct mlx5_tbl_multi_pattern_ctx *mpctx)
 	return mpctx->segments[0].head_index == 1;
 }
 
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_get_next(struct mlx5_tbl_multi_pattern_ctx *mpctx)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		if (!mpctx->segments[i].capacity)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
-static __rte_always_inline struct mlx5_multi_pattern_segment *
-mlx5_multi_pattern_segment_find(struct mlx5_tbl_multi_pattern_ctx *mpctx,
-				uint32_t flow_resource_ix)
-{
-	int i;
-
-	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
-		uint32_t limit = mpctx->segments[i].head_index +
-				 mpctx->segments[i].capacity;
-
-		if (flow_resource_ix < limit)
-			return &mpctx->segments[i];
-	}
-	return NULL;
-}
-
 struct mlx5_flow_template_table_cfg {
 	struct rte_flow_template_table_attr attr; /* Table attributes passed through flow API. */
 	bool external; /* True if created by flow API, false if table is internal to PMD. */
 };
 
+struct mlx5_matcher_info {
+	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	uint32_t refcnt;
+};
+
 struct rte_flow_template_table {
 	LIST_ENTRY(rte_flow_template_table) next;
 	struct mlx5_flow_group *grp; /* The group rte_flow_template_table uses. */
-	struct mlx5dr_matcher *matcher; /* Template matcher. */
+	struct mlx5_matcher_info matcher_info[2];
+	uint32_t matcher_selector;
+	rte_rwlock_t matcher_replace_rwlk; /* RW lock for resizable tables */
 	/* Item templates bind to the table. */
 	struct rte_flow_pattern_template *its[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
 	/* Action templates bind to the table. */
@@ -1576,8 +1562,34 @@ struct rte_flow_template_table {
 	uint8_t nb_action_templates; /* Action template number. */
 	uint32_t refcnt; /* Table reference counter. */
 	struct mlx5_tbl_multi_pattern_ctx mpctx;
+	struct mlx5dr_matcher_attr matcher_attr;
 };
 
+static __rte_always_inline struct mlx5dr_matcher *
+mlx5_table_matcher(const struct rte_flow_template_table *table)
+{
+	return table->matcher_info[table->matcher_selector].matcher;
+}
+
+static __rte_always_inline struct mlx5_multi_pattern_segment *
+mlx5_multi_pattern_segment_find(struct rte_flow_template_table *table,
+				uint32_t flow_resource_ix)
+{
+	int i;
+	struct mlx5_tbl_multi_pattern_ctx *mpctx = &table->mpctx;
+
+	if (likely(!rte_flow_template_table_resizable(0, &table->cfg.attr)))
+		return &mpctx->segments[0];
+	for (i = 0; i < MLX5_MAX_TABLE_RESIZE_NUM; i++) {
+		uint32_t limit = mpctx->segments[i].head_index +
+				 mpctx->segments[i].capacity;
+
+		if (flow_resource_ix < limit)
+			return &mpctx->segments[i];
+	}
+	return NULL;
+}
+
 #endif
 
 /*
@@ -2274,6 +2286,17 @@ typedef int
 			 enum rte_flow_encap_hash_field dest_field,
 			 uint8_t *hash,
 			 struct rte_flow_error *error);
+typedef int (*mlx5_table_resize_t)(struct rte_eth_dev *dev,
+				   struct rte_flow_template_table *table,
+				   uint32_t nb_rules, struct rte_flow_error *error);
+typedef int (*mlx5_flow_update_resized_t)
+			(struct rte_eth_dev *dev, uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow *rule, void *user_data,
+			 struct rte_flow_error *error);
+typedef int (*table_resize_complete_t)(struct rte_eth_dev *dev,
+				       struct rte_flow_template_table *table,
+				       struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -2348,6 +2371,9 @@ struct mlx5_flow_driver_ops {
 		async_action_list_handle_query_update;
 	mlx5_flow_calc_table_hash_t flow_calc_table_hash;
 	mlx5_flow_calc_encap_hash_t flow_calc_encap_hash;
+	mlx5_table_resize_t table_resize;
+	mlx5_flow_update_resized_t flow_update_resized;
+	table_resize_complete_t table_resize_complete;
 };
 
 /* mlx5_flow.c */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 05442f0bd3..51b37753d6 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -4,6 +4,7 @@
 
 #include <rte_flow.h>
 #include <rte_flow_driver.h>
+#include <rte_stdatomic.h>
 
 #include <mlx5_malloc.h>
 
@@ -2904,7 +2905,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t age_idx = 0;
 	struct mlx5_aso_mtr *aso_mtr;
-	struct mlx5_multi_pattern_segment *mp_segment;
+	struct mlx5_multi_pattern_segment *mp_segment = NULL;
 
 	rte_memcpy(rule_acts, hw_acts->rule_acts, sizeof(*rule_acts) * at->dr_actions_num);
 	attr.group = table->grp->group_id;
@@ -2918,17 +2919,20 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	} else {
 		attr.ingress = 1;
 	}
-	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0) {
+	if (hw_acts->mhdr && hw_acts->mhdr->mhdr_cmds_num > 0 && !hw_acts->mhdr->shared) {
 		uint16_t pos = hw_acts->mhdr->pos;
 
-		if (!hw_acts->mhdr->shared) {
-			rule_acts[pos].modify_header.offset =
-						job->flow->res_idx - 1;
-			rule_acts[pos].modify_header.data =
-						(uint8_t *)job->mhdr_cmd;
-			rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
-				   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
-		}
+		mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
+		if (!mp_segment || !mp_segment->mhdr_action)
+			return -1;
+		rule_acts[pos].action = mp_segment->mhdr_action;
+		/* offset is relative to DR action */
+		rule_acts[pos].modify_header.offset =
+					job->flow->res_idx - mp_segment->head_index;
+		rule_acts[pos].modify_header.data =
+					(uint8_t *)job->mhdr_cmd;
+		rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
+			   sizeof(*job->mhdr_cmd) * hw_acts->mhdr->mhdr_cmds_num);
 	}
 	LIST_FOREACH(act_data, &hw_acts->act_list, next) {
 		uint32_t jump_group;
@@ -3035,11 +3039,6 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(ipv6_push->size == act_data->ipv6_ext.len);
 			break;
 		case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
-			mp_segment = mlx5_multi_pattern_segment_find
-					(&table->mpctx, job->flow->res_idx);
-			if (!mp_segment || !mp_segment->mhdr_action)
-				return -1;
-			rule_acts[hw_acts->mhdr->pos].action = mp_segment->mhdr_action;
 			if (action->type == RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID)
 				ret = flow_hw_set_vlan_vid_construct(dev, job,
 								     act_data,
@@ -3196,11 +3195,13 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 
 		if (ix < 0)
 			return -1;
-		mp_segment = mlx5_multi_pattern_segment_find(&table->mpctx, job->flow->res_idx);
+		if (!mp_segment)
+			mp_segment = mlx5_multi_pattern_segment_find(table, job->flow->res_idx);
 		if (!mp_segment || !mp_segment->reformat_action[ix])
 			return -1;
 		ra->action = mp_segment->reformat_action[ix];
-		ra->reformat.offset = job->flow->res_idx - 1;
+		/* reformat offset is relative to selected DR action */
+		ra->reformat.offset = job->flow->res_idx - mp_segment->head_index;
 		ra->reformat.data = buf;
 	}
 	if (hw_acts->push_remove && !hw_acts->push_remove->shared) {
@@ -3372,10 +3373,26 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 					    pattern_template_index, job);
 	if (!rule_items)
 		goto error;
-	ret = mlx5dr_rule_create(table->matcher,
-				 pattern_template_index, rule_items,
-				 action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 pattern_template_index, rule_items,
+					 action_template_index, rule_acts,
+					 &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+		flow->matcher_selector = selector;
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3492,9 +3509,23 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		rte_errno = EINVAL;
 		goto error;
 	}
-	ret = mlx5dr_rule_create(table->matcher,
-				 0, items, action_template_index, rule_acts,
-				 &rule_attr, (struct mlx5dr_rule *)flow->rule);
+	if (likely(!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))) {
+		ret = mlx5dr_rule_create(table->matcher_info[0].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+	} else {
+		uint32_t selector;
+
+		job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE;
+		rte_rwlock_read_lock(&table->matcher_replace_rwlk);
+		selector = table->matcher_selector;
+		ret = mlx5dr_rule_create(table->matcher_info[selector].matcher,
+					 0, items, action_template_index,
+					 rule_acts, &rule_attr,
+					 (struct mlx5dr_rule *)flow->rule);
+		rte_rwlock_read_unlock(&table->matcher_replace_rwlk);
+	}
 	if (likely(!ret))
 		return (struct rte_flow *)flow;
 error:
@@ -3674,7 +3705,8 @@ flow_hw_async_flow_destroy(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOMEM,
 					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
 					  "fail to destroy rte flow: flow queue full");
-	job->type = MLX5_HW_Q_JOB_TYPE_DESTROY;
+	job->type = !rte_flow_template_table_resizable(dev->data->port_id, &fh->table->cfg.attr) ?
+		    MLX5_HW_Q_JOB_TYPE_DESTROY : MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY;
 	job->user_data = user_data;
 	job->flow = fh;
 	rule_attr.user_data = job;
@@ -3786,6 +3818,26 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 	}
 }
 
+static __rte_always_inline int
+mlx5_hw_pull_flow_transfer_comp(struct rte_eth_dev *dev,
+				uint32_t queue, struct rte_flow_op_result res[],
+				uint16_t n_res)
+{
+	uint32_t size, i;
+	struct mlx5_hw_q_job *job = NULL;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_ring *ring = priv->hw_q[queue].flow_transfer_completed;
+
+	size = RTE_MIN(rte_ring_count(ring), n_res);
+	for (i = 0; i < size; i++) {
+		res[i].status = RTE_FLOW_OP_SUCCESS;
+		rte_ring_dequeue(ring, (void **)&job);
+		res[i].user_data = job->user_data;
+		flow_hw_job_put(priv, job, queue);
+	}
+	return (int)size;
+}
+
 static inline int
 __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 				 uint32_t queue,
@@ -3834,6 +3886,80 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	return ret_comp;
 }
 
+static __rte_always_inline void
+hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
+			       struct mlx5_hw_q_job *job,
+			       uint32_t queue, struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	/* Release the original resource index in case of update. */
+	uint32_t res_idx = flow->res_idx;
+
+	if (flow->fate_type == MLX5_FLOW_FATE_JUMP)
+		flow_hw_jump_release(dev, flow->jump);
+	else if (flow->fate_type == MLX5_FLOW_FATE_QUEUE)
+		mlx5_hrxq_obj_release(dev, flow->hrxq);
+	if (mlx5_hws_cnt_id_valid(flow->cnt_id))
+		flow_hw_age_count_release(priv, queue,
+					  flow, error);
+	if (flow->mtr_id) {
+		mlx5_ipool_free(pool->idx_pool,	flow->mtr_id);
+		flow->mtr_id = 0;
+	}
+	if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
+		if (table) {
+			mlx5_ipool_free(table->resource, res_idx);
+			mlx5_ipool_free(table->flow, flow->idx);
+		}
+	} else {
+		rte_memcpy(flow, job->upd_flow,
+			   offsetof(struct rte_flow_hw, rule));
+		mlx5_ipool_free(table->resource, res_idx);
+	}
+}
+
+static __rte_always_inline void
+hw_cmpl_resizable_tbl(struct rte_eth_dev *dev,
+		      struct mlx5_hw_q_job *job,
+		      uint32_t queue, enum rte_flow_op_status status,
+		      struct rte_flow_error *error)
+{
+	struct rte_flow_hw *flow = job->flow;
+	struct rte_flow_template_table *table = flow->table;
+	uint32_t selector = flow->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+
+	switch (job->type) {
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		rte_atomic_fetch_add_explicit
+			(&table->matcher_info[selector].refcnt, 1,
+			 rte_memory_order_relaxed);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+		rte_atomic_fetch_sub_explicit
+			(&table->matcher_info[selector].refcnt, 1,
+			 rte_memory_order_relaxed);
+		hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+		break;
+	case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		if (status == RTE_FLOW_OP_SUCCESS) {
+			rte_atomic_fetch_sub_explicit
+				(&table->matcher_info[selector].refcnt, 1,
+				 rte_memory_order_relaxed);
+			rte_atomic_fetch_add_explicit
+				(&table->matcher_info[other_selector].refcnt, 1,
+				 rte_memory_order_relaxed);
+			flow->matcher_selector = other_selector;
+		}
+		break;
+	default:
+		break;
+	}
+}
+
 /**
  * Pull the enqueued flows.
  *
@@ -3862,9 +3988,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
-	uint32_t res_idx;
 	int ret, i;
 
 	/* 1. Pull the flow completion. */
@@ -3875,31 +3999,20 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				"fail to query flow queue");
 	for (i = 0; i <  ret; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
-		/* Release the original resource index in case of update. */
-		res_idx = job->flow->res_idx;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY ||
-		    job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) {
-			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
-				flow_hw_jump_release(dev, job->flow->jump);
-			else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
-				mlx5_hrxq_obj_release(dev, job->flow->hrxq);
-			if (mlx5_hws_cnt_id_valid(job->flow->cnt_id))
-				flow_hw_age_count_release(priv, queue,
-							  job->flow, error);
-			if (job->flow->mtr_id) {
-				mlx5_ipool_free(pool->idx_pool,	job->flow->mtr_id);
-				job->flow->mtr_id = 0;
-			}
-			if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-				mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
-			} else {
-				rte_memcpy(job->flow, job->upd_flow,
-					offsetof(struct rte_flow_hw, rule));
-				mlx5_ipool_free(job->flow->table->resource, res_idx);
-			}
+		switch (job->type) {
+		case MLX5_HW_Q_JOB_TYPE_DESTROY:
+		case MLX5_HW_Q_JOB_TYPE_UPDATE:
+			hw_cmpl_flow_update_or_destroy(dev, job, queue, error);
+			break;
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_CREATE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE:
+		case MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_DESTROY:
+			hw_cmpl_resizable_tbl(dev, job, queue, res[i].status, error);
+			break;
+		default:
+			break;
 		}
 		flow_hw_job_put(priv, job, queue);
 	}
@@ -3907,24 +4020,36 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	if (ret < n_res)
 		ret += __flow_hw_pull_indir_action_comp(dev, queue, &res[ret],
 							n_res - ret);
+	if (ret < n_res)
+		ret += mlx5_hw_pull_flow_transfer_comp(dev, queue, &res[ret],
+						       n_res - ret);
+
 	return ret;
 }
 
+static uint32_t
+mlx5_hw_push_queue(struct rte_ring *pending_q, struct rte_ring *cmpl_q)
+{
+	void *job = NULL;
+	uint32_t i, size = rte_ring_count(pending_q);
+
+	for (i = 0; i < size; i++) {
+		rte_ring_dequeue(pending_q, &job);
+		rte_ring_enqueue(cmpl_q, job);
+	}
+	return size;
+}
+
 static inline uint32_t
 __flow_hw_push_action(struct rte_eth_dev *dev,
 		    uint32_t queue)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_ring *iq = priv->hw_q[queue].indir_iq;
-	struct rte_ring *cq = priv->hw_q[queue].indir_cq;
-	void *job = NULL;
-	uint32_t ret, i;
+	struct mlx5_hw_q *hw_q = &priv->hw_q[queue];
 
-	ret = rte_ring_count(iq);
-	for (i = 0; i < ret; i++) {
-		rte_ring_dequeue(iq, &job);
-		rte_ring_enqueue(cq, job);
-	}
+	mlx5_hw_push_queue(hw_q->indir_iq, hw_q->indir_cq);
+	mlx5_hw_push_queue(hw_q->flow_transfer_pending,
+			   hw_q->flow_transfer_completed);
 	if (!priv->shared_host) {
 		if (priv->hws_ctpool)
 			mlx5_aso_push_wqe(priv->sh,
@@ -4333,6 +4458,8 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	grp = container_of(ge, struct mlx5_flow_group, entry);
 	tbl->grp = grp;
 	/* Prepare matcher information. */
+	matcher_attr.resizable = !!rte_flow_template_table_resizable
+					(dev->data->port_id, &table_cfg->attr);
 	matcher_attr.optimize_flow_src = MLX5DR_MATCHER_FLOW_SRC_ANY;
 	matcher_attr.priority = attr->flow_attr.priority;
 	matcher_attr.optimize_using_rule_idx = true;
@@ -4351,7 +4478,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 			       RTE_FLOW_TABLE_SPECIALIZE_TRANSFER_VPORT_ORIG;
 
 		if ((attr->specialize & val) == val) {
-			DRV_LOG(INFO, "Invalid hint value %x",
+			DRV_LOG(ERR, "Invalid hint value %x",
 				attr->specialize);
 			rte_errno = EINVAL;
 			goto it_error;
@@ -4395,10 +4522,11 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		i = nb_item_templates;
 		goto it_error;
 	}
-	tbl->matcher = mlx5dr_matcher_create
+	tbl->matcher_info[0].matcher = mlx5dr_matcher_create
 		(tbl->grp->tbl, mt, nb_item_templates, at, nb_action_templates, &matcher_attr);
-	if (!tbl->matcher)
+	if (!tbl->matcher_info[0].matcher)
 		goto at_error;
+	tbl->matcher_attr = matcher_attr;
 	tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
 		    (attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
 		    MLX5DR_TABLE_TYPE_NIC_RX);
@@ -4406,6 +4534,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
 	else
 		LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
+	rte_rwlock_init(&tbl->matcher_replace_rwlk);
 	return tbl;
 at_error:
 	for (i = 0; i < nb_action_templates; i++) {
@@ -4577,6 +4706,13 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 
 	if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
 		return NULL;
+	if (!cfg.attr.flow_attr.group &&
+	    rte_flow_template_table_resizable(dev->data->port_id, attr)) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "table cannot be resized: invalid group");
+		return NULL;
+	}
 	return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
 				    action_templates, nb_action_templates, error);
 }
@@ -4649,7 +4785,10 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 				   1, __ATOMIC_RELAXED);
 	}
 	flow_hw_destroy_table_multi_pattern_ctx(table);
-	mlx5dr_matcher_destroy(table->matcher);
+	if (table->matcher_info[0].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[0].matcher);
+	if (table->matcher_info[1].matcher)
+		mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
 	mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
@@ -9643,6 +9782,16 @@ action_template_drop_init(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static __rte_always_inline struct rte_ring *
+mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char *str)
+{
+	char mz_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(mz_name, sizeof(mz_name), "port_%u_%s_%u", port_id, str, queue);
+	return rte_ring_create(mz_name, size, SOCKET_ID_ANY,
+			       RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -9770,7 +9919,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		goto err;
 	}
 	for (i = 0; i < nb_q_updated; i++) {
-		char mz_name[RTE_MEMZONE_NAMESIZE];
 		uint8_t *encap = NULL, *push = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
@@ -9804,22 +9952,23 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_cq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		/* Notice ring name length is limited. */
+		priv->hw_q[i].indir_cq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_cq");
 		if (!priv->hw_q[i].indir_cq)
 			goto err;
-		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_iq_%u",
-			 dev->data->port_id, i);
-		priv->hw_q[i].indir_iq = rte_ring_create(mz_name,
-				_queue_attr[i]->size, SOCKET_ID_ANY,
-				RING_F_SP_ENQ | RING_F_SC_DEQ |
-				RING_F_EXACT_SZ);
+		priv->hw_q[i].indir_iq = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "indir_act_iq");
 		if (!priv->hw_q[i].indir_iq)
 			goto err;
+		priv->hw_q[i].flow_transfer_pending = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_pending");
+		if (!priv->hw_q[i].flow_transfer_pending)
+			goto err;
+		priv->hw_q[i].flow_transfer_completed = mlx5_hwq_ring_create
+			(dev->data->port_id, i, _queue_attr[i]->size, "tx_done");
+		if (!priv->hw_q[i].flow_transfer_completed)
+			goto err;
 	}
 	dr_ctx_attr.pd = priv->sh->cdev->pd;
 	dr_ctx_attr.queues = nb_q_updated;
@@ -10040,6 +10189,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	for (i = 0; i < nb_q_updated; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -10140,6 +10291,8 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	for (i = 0; i < priv->nb_queue; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
+		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
 	}
 	mlx5_free(priv->hw_q);
 	priv->hw_q = NULL;
@@ -11970,7 +12123,7 @@ flow_hw_calc_table_hash(struct rte_eth_dev *dev,
 	items = flow_hw_get_rule_items(dev, table, pattern,
 				       pattern_template_index,
 				       &job);
-	res = mlx5dr_rule_hash_calculate(table->matcher, items,
+	res = mlx5dr_rule_hash_calculate(mlx5_table_matcher(table), items,
 					 pattern_template_index,
 					 MLX5DR_RULE_HASH_CALC_MODE_RAW,
 					 hash);
@@ -12047,6 +12200,226 @@ flow_hw_calc_encap_hash(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static int
+flow_hw_table_resize_multi_pattern_actions(struct rte_eth_dev *dev,
+					   struct rte_flow_template_table *table,
+					   uint32_t nb_flows,
+					   struct rte_flow_error *error)
+{
+	struct mlx5_multi_pattern_segment *segment = table->mpctx.segments;
+	uint32_t bulk_size;
+	int i, ret;
+
+	/**
+	 * A segment always allocates Modify Header Argument Objects in
+	 * powers of 2.
+	 * On resize, the PMD adds the minimal required number of argument objects.
+	 * For example, if the table size was 10, 16 argument objects were allocated;
+	 * a resize to 15 will not add new objects.
+	 */
+	for (i = 1;
+	     i < MLX5_MAX_TABLE_RESIZE_NUM && segment->capacity;
+	     i++, segment++) {
+		/* keep the devtools/checkpatches.sh happy */
+	}
+	if (i == MLX5_MAX_TABLE_RESIZE_NUM)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	if (segment->head_index - 1 >= nb_flows)
+		return 0;
+	bulk_size = rte_align32pow2(nb_flows - segment->head_index + 1);
+	ret = mlx5_tbl_multi_pattern_process(dev, table, segment,
+					     rte_log2_u32(bulk_size),
+					     error);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "too many resizes");
+	return i;
+}
+
+static int
+flow_hw_table_resize(struct rte_eth_dev *dev,
+		     struct rte_flow_template_table *table,
+		     uint32_t nb_flows,
+		     struct rte_flow_error *error)
+{
+	struct mlx5dr_action_template *at[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
+	struct mlx5dr_match_template *mt[MLX5_HW_TBL_MAX_ITEM_TEMPLATE];
+	struct mlx5dr_matcher_attr matcher_attr = table->matcher_attr;
+	struct mlx5_multi_pattern_segment *segment = NULL;
+	struct mlx5dr_matcher *matcher = NULL;
+	uint32_t i, selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	int ret;
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	if (table->matcher_info[other_selector].matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "last table resize was not completed");
+	if (nb_flows <= table->cfg.attr.nb_flows)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "shrinking table is not supported");
+	ret = mlx5_ipool_resize(table->flow, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize flows pool");
+	ret = mlx5_ipool_resize(table->resource, nb_flows);
+	if (ret)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot resize resources pool");
+	if (mlx5_is_multi_pattern_active(&table->mpctx)) {
+		ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
+		if (ret < 0)
+			return ret;
+		if (ret > 0)
+			segment = table->mpctx.segments + ret;
+	}
+	for (i = 0; i < table->nb_item_templates; i++)
+		mt[i] = table->its[i]->mt;
+	for (i = 0; i < table->nb_action_templates; i++)
+		at[i] = table->ats[i].action_template->tmpl;
+	nb_flows = rte_align32pow2(nb_flows);
+	matcher_attr.rule.num_log = rte_log2_u32(nb_flows);
+	matcher = mlx5dr_matcher_create(table->grp->tbl, mt,
+					table->nb_item_templates, at,
+					table->nb_action_templates,
+					&matcher_attr);
+	if (!matcher) {
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to create new matcher");
+		goto error;
+	}
+	rte_rwlock_write_lock(&table->matcher_replace_rwlk);
+	ret = mlx5dr_matcher_resize_set_target
+			(table->matcher_info[selector].matcher, matcher);
+	if (ret) {
+		rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+		ret = rte_flow_error_set(error, rte_errno,
+					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					 table, "failed to initiate matcher swap");
+		goto error;
+	}
+	table->cfg.attr.nb_flows = nb_flows;
+	table->matcher_info[other_selector].matcher = matcher;
+	table->matcher_selector = other_selector;
+	rte_atomic_store_explicit(&table->matcher_info[other_selector].refcnt,
+				  0, rte_memory_order_relaxed);
+	rte_rwlock_write_unlock(&table->matcher_replace_rwlk);
+	return 0;
+error:
+	if (segment)
+		mlx5_destroy_multi_pattern_segment(segment);
+	if (matcher) {
+		ret = mlx5dr_matcher_destroy(matcher);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy new matcher");
+	}
+	return ret;
+}
+
+static int
+flow_hw_table_resize_complete(__rte_unused struct rte_eth_dev *dev,
+			      struct rte_flow_template_table *table,
+			      struct rte_flow_error *error)
+{
+	int ret;
+	uint32_t selector = table->matcher_selector;
+	uint32_t other_selector = (selector + 1) & 1;
+	struct mlx5_matcher_info *matcher_info = &table->matcher_info[other_selector];
+	uint32_t matcher_refcnt;
+
+	if (!rte_flow_template_table_resizable(dev->data->port_id, &table->cfg.attr))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "no resizable attribute");
+	matcher_refcnt = rte_atomic_load_explicit(&matcher_info->refcnt,
+						  rte_memory_order_relaxed);
+	if (!matcher_info->matcher || matcher_refcnt)
+		return rte_flow_error_set(error, EBUSY,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "cannot complete table resize");
+	ret = mlx5dr_matcher_destroy(matcher_info->matcher);
+	if (ret)
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+					  table, "failed to destroy retired matcher");
+	matcher_info->matcher = NULL;
+	return 0;
+}
+
+static int
+flow_hw_update_resized(struct rte_eth_dev *dev, uint32_t queue,
+		       const struct rte_flow_op_attr *attr,
+		       struct rte_flow *flow, void *user_data,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_hw_q_job *job;
+	struct rte_flow_hw *hw_flow = (struct rte_flow_hw *)flow;
+	struct rte_flow_template_table *table = hw_flow->table;
+	uint32_t table_selector = table->matcher_selector;
+	uint32_t rule_selector = hw_flow->matcher_selector;
+	uint32_t other_selector;
+	struct mlx5dr_matcher *other_matcher;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.burst = attr->postpone,
+	};
+
+	/**
+	 * mlx5dr_matcher_resize_rule_move() accepts original table matcher -
+	 * the one that was used BEFORE table resize.
+	 * Since the function is called AFTER table resize,
+	 * `table->matcher_selector` always points to the new matcher and
+	 * `hw_flow->matcher_selector` points to a matcher used to create the flow.
+	 */
+	other_selector = rule_selector == table_selector ?
+			 (rule_selector + 1) & 1 : rule_selector;
+	other_matcher = table->matcher_info[other_selector].matcher;
+	if (!other_matcher)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "no active table resize");
+	job = flow_hw_job_get(priv, queue);
+	if (!job)
+		return rte_flow_error_set(error, ENOMEM,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "queue is full");
+	job->type = MLX5_HW_Q_JOB_TYPE_RSZTBL_FLOW_MOVE;
+	job->user_data = user_data;
+	job->flow = hw_flow;
+	rule_attr.user_data = job;
+	if (rule_selector == table_selector) {
+		struct rte_ring *ring = !attr->postpone ?
+					priv->hw_q[queue].flow_transfer_completed :
+					priv->hw_q[queue].flow_transfer_pending;
+		rte_ring_enqueue(ring, job);
+		return 0;
+	}
+	ret = mlx5dr_matcher_resize_rule_move(other_matcher,
+					      (struct mlx5dr_rule *)hw_flow->rule,
+					      &rule_attr);
+	if (ret) {
+		flow_hw_job_put(priv, job, queue);
+		return rte_flow_error_set(error, rte_errno,
+					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "flow transfer failed");
+	}
+	return 0;
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -12058,11 +12431,14 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.actions_template_destroy = flow_hw_actions_template_destroy,
 	.template_table_create = flow_hw_template_table_create,
 	.template_table_destroy = flow_hw_table_destroy,
+	.table_resize = flow_hw_table_resize,
 	.group_set_miss_actions = flow_hw_group_set_miss_actions,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
 	.async_flow_update = flow_hw_async_flow_update,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
+	.flow_update_resized = flow_hw_update_resized,
+	.table_resize_complete = flow_hw_table_resize_complete,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,
 	.async_action_create = flow_hw_action_handle_create,
-- 
2.39.2


^ permalink raw reply	[flat|nested] 17+ messages in thread
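
A note on the matcher bookkeeping in the hunks above: a resizable template table
keeps two matcher slots (`matcher_info[0]` and `matcher_info[1]`), and both
`table->matcher_selector` and `hw_flow->matcher_selector` only ever hold 0 or 1,
so `(selector + 1) & 1` toggles between the slots. The fragment below is a
minimal editorial sketch of that scheme, not driver code; the structure and
field names are simplified assumptions for illustration only.

	/* Illustration only: two-slot matcher bookkeeping for a resizable table.
	 * The real driver keeps this state in struct rte_flow_template_table
	 * and struct rte_flow_hw; names here are simplified.
	 */
	struct toy_table {
		void *matcher[2];      /* stands in for matcher_info[0..1].matcher */
		unsigned int selector; /* slot receiving newly created rules */
	};

	/* On resize: place the larger matcher in the unused slot and switch to it. */
	static void
	toy_resize(struct toy_table *t, void *bigger_matcher)
	{
		unsigned int next = (t->selector + 1) & 1; /* toggle 0 <-> 1 */

		t->matcher[next] = bigger_matcher;
		t->selector = next; /* the old slot becomes the retired matcher */
	}

	/* A flow whose recorded slot differs from the table selector was created
	 * before the resize and must be moved; a flow already in the current slot
	 * only needs a completion to be reported, as in flow_hw_update_resized().
	 */
	static int
	toy_needs_move(const struct toy_table *t, unsigned int flow_selector)
	{
		return flow_selector != t->selector;
	}

Once every pre-resize flow has either been moved or reported as already in place,
the retired slot no longer holds references and flow_hw_table_resize_complete()
can destroy the old matcher; it returns EBUSY while the reference count is still
non-zero, as the first hunk above shows.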
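
At the API level, the three ops registered above (.table_resize,
.flow_update_resized and .table_resize_complete) back the template-table resize
calls introduced by the ethdev series this set depends on (series-30952). The
sketch below shows the expected application sequence; it is illustrative only,
assumes the function names and signatures proposed in that series
(rte_flow_template_table_resize(), rte_flow_async_update_resized(),
rte_flow_template_table_resize_complete()), assumes the table was created with
the resizable specialization attribute from the same series, and trims error
handling and completion draining.

	#include <rte_flow.h>

	/* Sketch only: grow a resizable template table and migrate its rules. */
	static int
	resize_and_migrate(uint16_t port_id, uint32_t queue,
			   struct rte_flow_template_table *table,
			   struct rte_flow **old_flows, uint32_t nb_old_flows,
			   uint32_t new_nb_rules)
	{
		struct rte_flow_error err;
		const struct rte_flow_op_attr op_attr = { .postpone = 0 };
		uint32_t i;

		/* 1. Grow the table: the PMD creates the larger matcher and points
		 *    table->matcher_selector at it, so new rules land there.
		 */
		if (rte_flow_template_table_resize(port_id, table, new_nb_rules, &err))
			return -1;

		/* 2. Queue a move for every pre-resize flow; each request completes
		 *    asynchronously (flow_hw_update_resized() in this patch).
		 */
		for (i = 0; i < nb_old_flows; i++)
			if (rte_flow_async_update_resized(port_id, queue, &op_attr,
							  old_flows[i], old_flows[i],
							  &err))
				return -1;

		/* 3. ... drain completions with rte_flow_push()/rte_flow_pull() ... */

		/* 4. Retire the old matcher once no rules reference it
		 *    (flow_hw_table_resize_complete() above).
		 */
		return rte_flow_template_table_resize_complete(port_id, table, &err);
	}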

* RE: [PATCH v3 0/4] net/mlx5: add support for flow table resizing
  2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
                       ` (3 preceding siblings ...)
  2024-02-28 13:33     ` [PATCH v3 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
@ 2024-02-28 15:50     ` Raslan Darawsheh
  4 siblings, 0 replies; 17+ messages in thread
From: Raslan Darawsheh @ 2024-02-28 15:50 UTC (permalink / raw)
  To: Gregory Etelson, dev; +Cc: Maayan Kashani, Dariusz Sosnowski

Hi,

> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Wednesday, February 28, 2024 3:33 PM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Maayan Kashani
> <mkashani@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Dariusz
> Sosnowski <dsosnowski@nvidia.com>
> Subject: [PATCH v3 0/4] net/mlx5: add support for flow table resizing
> 
> Support template table resize API.
> 
> Gregory Etelson (3):
>   net/mlx5: fix parameters verification in HWS table create
>   net/mlx5: move multi-pattern actions management to table level
>   net/mlx5: add support for flow table resizing
> 
> Maayan Kashani (1):
>   net/mlx5: add resize function to ipool
> 
>  drivers/net/mlx5/mlx5.h         |   5 +
>  drivers/net/mlx5/mlx5_flow.c    |  51 +++
>  drivers/net/mlx5/mlx5_flow.h    | 101 ++++-
>  drivers/net/mlx5/mlx5_flow_hw.c | 761 +++++++++++++++++++++++-------
> --
>  drivers/net/mlx5/mlx5_utils.c   |  29 ++
>  drivers/net/mlx5/mlx5_utils.h   |  16 +
>  6 files changed, 763 insertions(+), 200 deletions(-)
> 
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> 
> --
> v2: Update PMD after DPDK API changes.
> v3: Use RTE atomic API.
> --
> 2.39.2
Series applied to next-net-mlx,

Raslan Darawsheh

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2024-02-28 15:50 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
2024-02-02 11:56 [PATCH 0/5] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-02 11:56 ` [PATCH 1/5] net/mlx5/hws: add support for resizable matchers Gregory Etelson
2024-02-28 10:25   ` [PATCH v2 0/4] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 1/4] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-28 10:25     ` [PATCH v2 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-28 13:33   ` [PATCH v3 0/4] " Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 1/4] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 2/4] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 3/4] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-28 13:33     ` [PATCH v3 4/4] net/mlx5: add support for flow table resizing Gregory Etelson
2024-02-28 15:50     ` [PATCH v3 0/4] " Raslan Darawsheh
2024-02-02 11:56 ` [PATCH 2/5] net/mlx5: add resize function to ipool Gregory Etelson
2024-02-02 11:56 ` [PATCH 3/5] net/mlx5: fix parameters verification in HWS table create Gregory Etelson
2024-02-02 11:56 ` [PATCH 4/5] net/mlx5: move multi-pattern actions management to table level Gregory Etelson
2024-02-02 11:56 ` [PATCH 5/5] net/mlx5: add support for flow table resizing Gregory Etelson
