DPDK patches and discussions
* [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe
@ 2024-03-06 20:21 Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules Dariusz Sosnowski
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Dariusz Sosnowski @ 2024-03-06 20:21 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Yevgeny Kliteynik
  Cc: dev, Alex Vesker, stable

From: Alex Vesker <valex@nvidia.com>

If a dependent WQE was required and a direct index was needed, the
direct index was not set on the dep_wqe. This led to such rules being
incorrectly inserted at index zero.
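
A simplified sketch of the fixed flow, using the structures touched by
this patch (an illustration of the two key assignments only, not the
full code path):

    /* At rule creation time, record the chosen index on the
     * dependent WQE instead of losing it:
     */
    dep_wqe->direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
                            attr->rule_idx : 0;

    /* When the dependent WQE is finally posted, reuse that index: */
    ste_attr.direct_index = dep_wqe->direct_index;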

Fixes: 38b5bf6452a6 ("net/mlx5/hws: support insert/distribute RTC properties")
Cc: stable@dpdk.org

Signed-off-by: Alex Vesker <valex@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 15 ++++++++-------
 drivers/net/mlx5/hws/mlx5dr_send.c |  1 +
 drivers/net/mlx5/hws/mlx5dr_send.h |  1 +
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index aa00c54e53..f14e1e6ecd 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -58,14 +58,16 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 				     struct mlx5dr_rule *rule,
 				     const struct rte_flow_item *items,
 				     struct mlx5dr_match_template *mt,
-				     void *user_data)
+				     struct mlx5dr_rule_attr *attr)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
 	struct mlx5dr_table *tbl = matcher->tbl;
 	bool skip_rx, skip_tx;
 
 	dep_wqe->rule = rule;
-	dep_wqe->user_data = user_data;
+	dep_wqe->user_data = attr->user_data;
+	dep_wqe->direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
+		attr->rule_idx : 0;
 
 	if (!items) { /* rule update */
 		dep_wqe->rtc_0 = rule->rtc_0;
@@ -374,8 +376,8 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 	}
 
 	mlx5dr_rule_create_init(rule, &ste_attr, &apply, false);
-	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
-	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
+	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr);
+	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr);
 
 	ste_attr.direct_index = 0;
 	ste_attr.rtc_0 = match_wqe.rtc_0;
@@ -482,7 +484,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	 * dep_wqe buffers (ctrl, data) are also reused for all STE writes.
 	 */
 	dep_wqe = mlx5dr_send_add_new_dep_wqe(queue);
-	mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr->user_data);
+	mlx5dr_rule_init_dep_wqe(dep_wqe, rule, items, mt, attr);
 
 	ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
 	ste_attr.wqe_data = &dep_wqe->wqe_data;
@@ -544,8 +546,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 			ste_attr.used_id_rtc_1 = &rule->rtc_1;
 			ste_attr.retry_rtc_0 = dep_wqe->retry_rtc_0;
 			ste_attr.retry_rtc_1 = dep_wqe->retry_rtc_1;
-			ste_attr.direct_index = mlx5dr_matcher_is_insert_by_idx(matcher) ?
-						attr->rule_idx : 0;
+			ste_attr.direct_index = dep_wqe->direct_index;
 		} else {
 			apply.next_direct_idx = --ste_attr.direct_index;
 		}
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index 64138279a1..f749401c6f 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -50,6 +50,7 @@ void mlx5dr_send_all_dep_wqe(struct mlx5dr_send_engine *queue)
 		ste_attr.used_id_rtc_1 = &dep_wqe->rule->rtc_1;
 		ste_attr.wqe_ctrl = &dep_wqe->wqe_ctrl;
 		ste_attr.wqe_data = &dep_wqe->wqe_data;
+		ste_attr.direct_index = dep_wqe->direct_index;
 
 		mlx5dr_send_ste(queue, &ste_attr);
 
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index c1e8616f7e..c4eaea52ab 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -106,6 +106,7 @@ struct mlx5dr_send_ring_dep_wqe {
 	uint32_t rtc_1;
 	uint32_t retry_rtc_0;
 	uint32_t retry_rtc_1;
+	uint32_t direct_index;
 	void *user_data;
 };
 
-- 
2.39.2


* [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules
  2024-03-06 20:21 [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Dariusz Sosnowski
@ 2024-03-06 20:21 ` Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 3/4] net/mlx5: fix rollback on failed flow configure Dariusz Sosnowski
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Dariusz Sosnowski @ 2024-03-06 20:21 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad, Bing Zhao
  Cc: dev, stable

This patch refactors the creation and clean-up of the templates used
for FDB control flow rules when HWS is enabled. All pattern and actions
templates, as well as the template tables, are stored in a separate
structure, `mlx5_flow_hw_ctrl_fdb`, which is allocated if and only if
E-Switch is enabled. During HWS clean-up, all of these templates are
now destroyed explicitly, instead of relying on the general template
clean-up.
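
With all templates gathered in one structure, the error path of
template creation and the regular resource release can share a single
teardown helper. A minimal sketch of that helper's shape (field and
function names as used by this patch; tables are destroyed before the
templates they were created from):

    struct mlx5_flow_hw_ctrl_fdb *ctrl = priv->hw_ctrl_fdb;

    if (ctrl == NULL)
        return; /* Nothing was allocated, so nothing to destroy. */
    if (ctrl->hw_lacp_rx_tbl)
        claim_zero(flow_hw_table_destroy(dev, ctrl->hw_lacp_rx_tbl, NULL));
    if (ctrl->lacp_rx_actions_tmpl)
        claim_zero(flow_hw_actions_template_destroy(dev, ctrl->lacp_rx_actions_tmpl, NULL));
    if (ctrl->lacp_rx_items_tmpl)
        claim_zero(flow_hw_pattern_template_destroy(dev, ctrl->lacp_rx_items_tmpl, NULL));
    /* ...and likewise for the remaining table/template groups. */
    mlx5_free(ctrl);
    priv->hw_ctrl_fdb = NULL;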

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   6 +-
 drivers/net/mlx5/mlx5_flow.h    |  19 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 255 ++++++++++++++++++--------------
 3 files changed, 166 insertions(+), 114 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 2fb3bb65cc..db68c8f884 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1894,11 +1894,7 @@ struct mlx5_priv {
 	rte_spinlock_t hw_ctrl_lock;
 	LIST_HEAD(hw_ctrl_flow, mlx5_hw_ctrl_flow) hw_ctrl_flows;
 	LIST_HEAD(hw_ext_ctrl_flow, mlx5_hw_ctrl_flow) hw_ext_ctrl_flows;
-	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
-	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
-	struct rte_flow_template_table *hw_esw_zero_tbl;
-	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
-	struct rte_flow_template_table *hw_lacp_rx_tbl;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
 	struct rte_flow_actions_template *hw_tx_repr_tagging_at;
 	struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 714a41e997..d58290e5b4 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2775,6 +2775,25 @@ struct mlx5_flow_hw_ctrl_rx {
 						[MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX];
 };
 
+/* Contains all templates required for control flow rules in FDB with HWS. */
+struct mlx5_flow_hw_ctrl_fdb {
+	struct rte_flow_pattern_template *esw_mgr_items_tmpl;
+	struct rte_flow_actions_template *regc_jump_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_root_tbl;
+	struct rte_flow_pattern_template *regc_sq_items_tmpl;
+	struct rte_flow_actions_template *port_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
+	struct rte_flow_pattern_template *port_items_tmpl;
+	struct rte_flow_actions_template *jump_one_actions_tmpl;
+	struct rte_flow_template_table *hw_esw_zero_tbl;
+	struct rte_flow_pattern_template *tx_meta_items_tmpl;
+	struct rte_flow_actions_template *tx_meta_actions_tmpl;
+	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+	struct rte_flow_pattern_template *lacp_rx_items_tmpl;
+	struct rte_flow_actions_template *lacp_rx_actions_tmpl;
+	struct rte_flow_template_table *hw_lacp_rx_tbl;
+};
+
 #define MLX5_CTRL_PROMISCUOUS    (RTE_BIT32(0))
 #define MLX5_CTRL_ALL_MULTICAST  (RTE_BIT32(1))
 #define MLX5_CTRL_BROADCAST      (RTE_BIT32(2))
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 4216433c6e..21c37b7539 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9327,6 +9327,72 @@ flow_hw_create_ctrl_jump_table(struct rte_eth_dev *dev,
 	return flow_hw_table_create(dev, &cfg, &it, 1, &at, 1, error);
 }
 
+/**
+ * Cleans up all template tables and pattern, and actions templates used for
+ * FDB control flow rules.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+static void
+flow_hw_cleanup_ctrl_fdb_tables(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
+
+	if (!priv->hw_ctrl_fdb)
+		return;
+	hw_ctrl_fdb = priv->hw_ctrl_fdb;
+	/* Clean up templates used for LACP default miss table. */
+	if (hw_ctrl_fdb->hw_lacp_rx_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_lacp_rx_tbl, NULL));
+	if (hw_ctrl_fdb->lacp_rx_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->lacp_rx_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->lacp_rx_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default Tx metadata copy. */
+	if (hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_tx_meta_cpy_tbl, NULL));
+	if (hw_ctrl_fdb->tx_meta_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->tx_meta_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->tx_meta_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default FDB jump rule. */
+	if (hw_ctrl_fdb->hw_esw_zero_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_zero_tbl, NULL));
+	if (hw_ctrl_fdb->jump_one_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->jump_one_actions_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->port_items_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - non-root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_tbl, NULL));
+	if (hw_ctrl_fdb->regc_sq_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->regc_sq_items_tmpl,
+			   NULL));
+	if (hw_ctrl_fdb->port_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev, hw_ctrl_fdb->port_actions_tmpl,
+			   NULL));
+	/* Clean up templates used for default SQ miss flow rules - root table. */
+	if (hw_ctrl_fdb->hw_esw_sq_miss_root_tbl)
+		claim_zero(flow_hw_table_destroy(dev, hw_ctrl_fdb->hw_esw_sq_miss_root_tbl, NULL));
+	if (hw_ctrl_fdb->regc_jump_actions_tmpl)
+		claim_zero(flow_hw_actions_template_destroy(dev,
+			   hw_ctrl_fdb->regc_jump_actions_tmpl, NULL));
+	if (hw_ctrl_fdb->esw_mgr_items_tmpl)
+		claim_zero(flow_hw_pattern_template_destroy(dev, hw_ctrl_fdb->esw_mgr_items_tmpl,
+			   NULL));
+	/* Clean up templates structure for FDB control flow rules. */
+	mlx5_free(hw_ctrl_fdb);
+	priv->hw_ctrl_fdb = NULL;
+}
+
 /*
  * Create a table on the root group to for the LACP traffic redirecting.
  *
@@ -9376,110 +9442,109 @@ flow_hw_create_lacp_rx_table(struct rte_eth_dev *dev,
  * @return
  *   0 on success, negative values otherwise
  */
-static __rte_unused int
+static int
 flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct rte_flow_pattern_template *esw_mgr_items_tmpl = NULL;
-	struct rte_flow_pattern_template *regc_sq_items_tmpl = NULL;
-	struct rte_flow_pattern_template *port_items_tmpl = NULL;
-	struct rte_flow_pattern_template *tx_meta_items_tmpl = NULL;
-	struct rte_flow_pattern_template *lacp_rx_items_tmpl = NULL;
-	struct rte_flow_actions_template *regc_jump_actions_tmpl = NULL;
-	struct rte_flow_actions_template *port_actions_tmpl = NULL;
-	struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
-	struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
-	struct rte_flow_actions_template *lacp_rx_actions_tmpl = NULL;
+	struct mlx5_flow_hw_ctrl_fdb *hw_ctrl_fdb;
 	uint32_t xmeta = priv->sh->config.dv_xmeta_en;
 	uint32_t repr_matching = priv->sh->config.repr_matching;
-	int ret;
 
+	MLX5_ASSERT(priv->hw_ctrl_fdb == NULL);
+	hw_ctrl_fdb = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*hw_ctrl_fdb), 0, SOCKET_ID_ANY);
+	if (!hw_ctrl_fdb) {
+		DRV_LOG(ERR, "port %u failed to allocate memory for FDB control flow templates",
+			dev->data->port_id);
+		rte_errno = ENOMEM;
+		goto err;
+	}
+	priv->hw_ctrl_fdb = hw_ctrl_fdb;
 	/* Create templates and table for default SQ miss flow rules - root table. */
-	esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
-	if (!esw_mgr_items_tmpl) {
+	hw_ctrl_fdb->esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->esw_mgr_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
 			" template for control flows", dev->data->port_id);
 		goto err;
 	}
-	regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev, error);
-	if (!regc_jump_actions_tmpl) {
+	hw_ctrl_fdb->regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template
+			(dev, error);
+	if (!hw_ctrl_fdb->regc_jump_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
-	priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
-			(dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_root_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
+			(dev, hw_ctrl_fdb->esw_mgr_items_tmpl, hw_ctrl_fdb->regc_jump_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_root_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default SQ miss flow rules - non-root table. */
-	regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
-	if (!regc_sq_items_tmpl) {
+	hw_ctrl_fdb->regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->regc_sq_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
-	if (!port_actions_tmpl) {
+	hw_ctrl_fdb->port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev, error);
+	if (!hw_ctrl_fdb->port_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create port action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
-	priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
-								     port_actions_tmpl, error);
-	if (!priv->hw_esw_sq_miss_tbl) {
+	hw_ctrl_fdb->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table
+			(dev, hw_ctrl_fdb->regc_sq_items_tmpl, hw_ctrl_fdb->port_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default sq miss (non-root table)"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default FDB jump flow rules. */
-	port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
-	if (!port_items_tmpl) {
+	hw_ctrl_fdb->port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev, error);
+	if (!hw_ctrl_fdb->port_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create SQ item template for"
 			" control flows", dev->data->port_id);
 		goto err;
 	}
-	jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+	hw_ctrl_fdb->jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
 			(dev, MLX5_HW_LOWEST_USABLE_GROUP, error);
-	if (!jump_one_actions_tmpl) {
+	if (!hw_ctrl_fdb->jump_one_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create jump action template"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
-	MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
-	priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
-							       jump_one_actions_tmpl,
-							       error);
-	if (!priv->hw_esw_zero_tbl) {
+	hw_ctrl_fdb->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table
+			(dev, hw_ctrl_fdb->port_items_tmpl, hw_ctrl_fdb->jump_one_actions_tmpl,
+			 error);
+	if (!hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "port %u failed to create table for default jump to group 1"
 			" for control flows", dev->data->port_id);
 		goto err;
 	}
 	/* Create templates and table for default Tx metadata copy flow rule. */
 	if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
-		tx_meta_items_tmpl =
+		hw_ctrl_fdb->tx_meta_items_tmpl =
 			flow_hw_create_tx_default_mreg_copy_pattern_template(dev, error);
-		if (!tx_meta_items_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		tx_meta_actions_tmpl =
+		hw_ctrl_fdb->tx_meta_actions_tmpl =
 			flow_hw_create_tx_default_mreg_copy_actions_template(dev, error);
-		if (!tx_meta_actions_tmpl) {
+		if (!hw_ctrl_fdb->tx_meta_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
 				" template for control flows", dev->data->port_id);
 			goto err;
 		}
-		MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
-		priv->hw_tx_meta_cpy_tbl =
-			flow_hw_create_tx_default_mreg_copy_table(dev, tx_meta_items_tmpl,
-								  tx_meta_actions_tmpl, error);
-		if (!priv->hw_tx_meta_cpy_tbl) {
+		hw_ctrl_fdb->hw_tx_meta_cpy_tbl =
+			flow_hw_create_tx_default_mreg_copy_table
+				(dev, hw_ctrl_fdb->tx_meta_items_tmpl,
+				 hw_ctrl_fdb->tx_meta_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_tx_meta_cpy_tbl) {
 			DRV_LOG(ERR, "port %u failed to create table for default"
 				" Tx metadata copy flow rule", dev->data->port_id);
 			goto err;
@@ -9487,71 +9552,34 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev, struct rte_flow_error *error
 	}
 	/* Create LACP default miss table. */
 	if (!priv->sh->config.lacp_by_user && priv->pf_bond >= 0 && priv->master) {
-		lacp_rx_items_tmpl = flow_hw_create_lacp_rx_pattern_template(dev, error);
-		if (!lacp_rx_items_tmpl) {
+		hw_ctrl_fdb->lacp_rx_items_tmpl =
+				flow_hw_create_lacp_rx_pattern_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_items_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create pattern template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		lacp_rx_actions_tmpl = flow_hw_create_lacp_rx_actions_template(dev, error);
-		if (!lacp_rx_actions_tmpl) {
+		hw_ctrl_fdb->lacp_rx_actions_tmpl =
+				flow_hw_create_lacp_rx_actions_template(dev, error);
+		if (!hw_ctrl_fdb->lacp_rx_actions_tmpl) {
 			DRV_LOG(ERR, "port %u failed to create actions template"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
-		priv->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table(dev, lacp_rx_items_tmpl,
-								    lacp_rx_actions_tmpl, error);
-		if (!priv->hw_lacp_rx_tbl) {
+		hw_ctrl_fdb->hw_lacp_rx_tbl = flow_hw_create_lacp_rx_table
+				(dev, hw_ctrl_fdb->lacp_rx_items_tmpl,
+				 hw_ctrl_fdb->lacp_rx_actions_tmpl, error);
+		if (!hw_ctrl_fdb->hw_lacp_rx_tbl) {
 			DRV_LOG(ERR, "port %u failed to create template table for"
 				" for LACP Rx traffic", dev->data->port_id);
 			goto err;
 		}
 	}
 	return 0;
+
 err:
-	/* Do not overwrite the rte_errno. */
-	ret = -rte_errno;
-	if (ret == 0)
-		ret = rte_flow_error_set(error, EINVAL,
-					 RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-					 "Failed to create control tables.");
-	if (priv->hw_tx_meta_cpy_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_tx_meta_cpy_tbl, NULL);
-		priv->hw_tx_meta_cpy_tbl = NULL;
-	}
-	if (priv->hw_esw_zero_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_zero_tbl, NULL);
-		priv->hw_esw_zero_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_tbl, NULL);
-		priv->hw_esw_sq_miss_tbl = NULL;
-	}
-	if (priv->hw_esw_sq_miss_root_tbl) {
-		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
-		priv->hw_esw_sq_miss_root_tbl = NULL;
-	}
-	if (lacp_rx_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, lacp_rx_actions_tmpl, NULL);
-	if (tx_meta_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
-	if (jump_one_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
-	if (port_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
-	if (regc_jump_actions_tmpl)
-		flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
-	if (lacp_rx_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, lacp_rx_items_tmpl, NULL);
-	if (tx_meta_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
-	if (port_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
-	if (regc_sq_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, regc_sq_items_tmpl, NULL);
-	if (esw_mgr_items_tmpl)
-		flow_hw_pattern_template_destroy(dev, esw_mgr_items_tmpl, NULL);
-	return ret;
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
+	return -EINVAL;
 }
 
 static void
@@ -10583,6 +10611,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	action_template_drop_release(dev);
 	mlx5_flow_quota_destroy(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_free_vport_actions(priv);
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
 		if (priv->hw_drop[i])
@@ -10645,6 +10674,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	dev->flow_fp_ops = &rte_flow_fp_default_ops;
 	flow_hw_rxq_flag_set(dev, false);
 	flow_hw_flush_all_ctrl_flows(dev);
+	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
 	flow_hw_cleanup_ctrl_rx_tables(dev);
 	action_template_drop_release(dev);
@@ -13211,8 +13241,9 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 			       proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			     proxy_port_id, port_id);
@@ -13244,7 +13275,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 	actions[2] = (struct rte_flow_action) {
 		.type = RTE_FLOW_ACTION_TYPE_END,
 	};
-	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_root_tbl,
+	ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+				       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl,
 				       items, 0, actions, 0, &flow_info, external);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u failed to create root SQ miss flow rule for SQ %u, ret %d",
@@ -13275,7 +13307,8 @@ mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn, bool
 		.type = RTE_FLOW_ACTION_TYPE_END,
 	};
 	flow_info.type = MLX5_HW_CTRL_FLOW_TYPE_SQ_MISS;
-	ret = flow_hw_create_ctrl_flow(dev, proxy_dev, proxy_priv->hw_esw_sq_miss_tbl,
+	ret = flow_hw_create_ctrl_flow(dev, proxy_dev,
+				       proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl,
 				       items, 0, actions, 0, &flow_info, external);
 	if (ret) {
 		DRV_LOG(ERR, "Port %u failed to create HWS SQ miss flow rule for SQ %u, ret %d",
@@ -13321,8 +13354,9 @@ mlx5_flow_hw_esw_destroy_sq_miss_flow(struct rte_eth_dev *dev, uint32_t sqn)
 	proxy_priv = proxy_dev->data->dev_private;
 	if (!proxy_priv->dr_ctx)
 		return 0;
-	if (!proxy_priv->hw_esw_sq_miss_root_tbl ||
-	    !proxy_priv->hw_esw_sq_miss_tbl)
+	if (!proxy_priv->hw_ctrl_fdb ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_root_tbl ||
+	    !proxy_priv->hw_ctrl_fdb->hw_esw_sq_miss_tbl)
 		return 0;
 	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
 	while (cf != NULL) {
@@ -13389,7 +13423,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 			       proxy_port_id, port_id);
 		return 0;
 	}
-	if (!proxy_priv->hw_esw_zero_tbl) {
+	if (!proxy_priv->hw_ctrl_fdb || !proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl) {
 		DRV_LOG(ERR, "Transfer proxy port (port %u) of port %u was configured, but "
 			     "default flow tables were not created.",
 			     proxy_port_id, port_id);
@@ -13397,7 +13431,7 @@ mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev)
 		return -rte_errno;
 	}
 	return flow_hw_create_ctrl_flow(dev, proxy_dev,
-					proxy_priv->hw_esw_zero_tbl,
+					proxy_priv->hw_ctrl_fdb->hw_esw_zero_tbl,
 					items, 0, actions, 0, &flow_info, false);
 }
 
@@ -13449,10 +13483,12 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 	};
 
 	MLX5_ASSERT(priv->master);
-	if (!priv->dr_ctx || !priv->hw_tx_meta_cpy_tbl)
+	if (!priv->dr_ctx ||
+	    !priv->hw_ctrl_fdb ||
+	    !priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl)
 		return 0;
 	return flow_hw_create_ctrl_flow(dev, dev,
-					priv->hw_tx_meta_cpy_tbl,
+					priv->hw_ctrl_fdb->hw_tx_meta_cpy_tbl,
 					eth_all, 0, copy_reg_action, 0, &flow_info, false);
 }
 
@@ -13544,10 +13580,11 @@ mlx5_flow_hw_lacp_rx_flow(struct rte_eth_dev *dev)
 		.type = MLX5_HW_CTRL_FLOW_TYPE_LACP_RX,
 	};
 
-	if (!priv->dr_ctx || !priv->hw_lacp_rx_tbl)
+	if (!priv->dr_ctx || !priv->hw_ctrl_fdb || !priv->hw_ctrl_fdb->hw_lacp_rx_tbl)
 		return 0;
-	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_lacp_rx_tbl, eth_lacp, 0,
-					miss_action, 0, &flow_info, false);
+	return flow_hw_create_ctrl_flow(dev, dev,
+					priv->hw_ctrl_fdb->hw_lacp_rx_tbl,
+					eth_lacp, 0, miss_action, 0, &flow_info, false);
 }
 
 static uint32_t
-- 
2.39.2


* [PATCH 3/4] net/mlx5: fix rollback on failed flow configure
  2024-03-06 20:21 [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules Dariusz Sosnowski
@ 2024-03-06 20:21 ` Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 4/4] net/mlx5: fix flow configure validation Dariusz Sosnowski
  2024-03-13  7:46 ` [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Raslan Darawsheh
  3 siblings, 0 replies; 5+ messages in thread
From: Dariusz Sosnowski @ 2024-03-06 20:21 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Bing Zhao, Gregory Etelson, Michael Baum
  Cc: dev, stable

If rte_flow_configure() failed, some port resources were neither freed
nor reset to their default state. As a result, assumptions made
elsewhere in the PMD were invalidated, which led to segmentation faults
during the release of HW Steering resources when the port was closed.

This patch adds the missing resource releases to the rollback procedure
in the mlx5 PMD implementation of rte_flow_configure(). The whole
rollback procedure is also reordered for clarity, so that it follows
the reverse order of resource allocation.
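
The underlying idiom is that rollback undoes resources in the reverse
order of their acquisition and is safe to enter from any failure point.
A generic sketch of the pattern (plain C with placeholder alloc_* and
free_* helpers, not the actual driver code):

    int configure(struct ctx *c)
    {
        c->a = alloc_a();
        if (c->a == NULL)
            goto err;
        c->b = alloc_b();
        if (c->b == NULL)
            goto err;
        return 0;
    err:
        /* Free in reverse order of allocation; check for NULL and
         * reset each pointer so a later close cannot double-free.
         */
        if (c->b != NULL) {
            free_b(c->b);
            c->b = NULL;
        }
        if (c->a != NULL) {
            free_a(c->a);
            c->a = NULL;
        }
        return -1;
    }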

Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Fixes: 8a5c816691e7 ("net/mlx5: create NAT64 actions during configuration")
Fixes: 773ca0e91ba1 ("net/mlx5: support VLAN push/pop/modify with HWS")
Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Fixes: c3f085a4858c ("net/mlx5: improve pattern template validation")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 65 ++++++++++++++++++++-------------
 1 file changed, 40 insertions(+), 25 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 21c37b7539..17ab3a98fe 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10188,7 +10188,7 @@ flow_hw_compare_config(const struct mlx5_flow_hw_attr *hw_attr,
  * mlx5_dev_close -> flow_hw_resource_release -> flow_hw_actions_template_destroy
  */
 static void
-action_template_drop_release(struct rte_eth_dev *dev)
+flow_hw_action_template_drop_release(struct rte_eth_dev *dev)
 {
 	int i;
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -10204,7 +10204,7 @@ action_template_drop_release(struct rte_eth_dev *dev)
 }
 
 static int
-action_template_drop_init(struct rte_eth_dev *dev,
+flow_hw_action_template_drop_init(struct rte_eth_dev *dev,
 			  struct rte_flow_error *error)
 {
 	const struct rte_flow_action drop[2] = {
@@ -10466,7 +10466,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	rte_spinlock_init(&priv->hw_ctrl_lock);
 	LIST_INIT(&priv->hw_ctrl_flows);
 	LIST_INIT(&priv->hw_ext_ctrl_flows);
-	ret = action_template_drop_init(dev, error);
+	ret = flow_hw_action_template_drop_init(dev, error);
 	if (ret)
 		goto err;
 	ret = flow_hw_create_ctrl_rx_tables(dev);
@@ -10594,6 +10594,15 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	dev->flow_fp_ops = &mlx5_flow_hw_fp_ops;
 	return 0;
 err:
+	priv->hws_strict_queue = 0;
+	flow_hw_destroy_nat64_actions(priv);
+	flow_hw_destroy_vlan(dev);
+	if (priv->hws_age_req)
+		mlx5_hws_age_pool_destroy(priv);
+	if (priv->hws_cpool) {
+		mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
+		priv->hws_cpool = NULL;
+	}
 	if (priv->hws_ctpool) {
 		flow_hw_ct_pool_destroy(dev, priv->hws_ctpool);
 		priv->hws_ctpool = NULL;
@@ -10602,29 +10611,38 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		flow_hw_ct_mng_destroy(dev, priv->ct_mng);
 		priv->ct_mng = NULL;
 	}
-	if (priv->hws_age_req)
-		mlx5_hws_age_pool_destroy(priv);
-	if (priv->hws_cpool) {
-		mlx5_hws_cnt_pool_destroy(priv->sh, priv->hws_cpool);
-		priv->hws_cpool = NULL;
-	}
-	action_template_drop_release(dev);
-	mlx5_flow_quota_destroy(dev);
 	flow_hw_destroy_send_to_kernel_action(priv);
 	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_free_vport_actions(priv);
+	if (priv->hw_def_miss) {
+		mlx5dr_action_destroy(priv->hw_def_miss);
+		priv->hw_def_miss = NULL;
+	}
+	flow_hw_cleanup_tx_repr_tagging(dev);
 	for (i = 0; i < MLX5_HW_ACTION_FLAG_MAX; i++) {
-		if (priv->hw_drop[i])
+		if (priv->hw_drop[i]) {
 			mlx5dr_action_destroy(priv->hw_drop[i]);
-		if (priv->hw_tag[i])
+			priv->hw_drop[i] = NULL;
+		}
+		if (priv->hw_tag[i]) {
 			mlx5dr_action_destroy(priv->hw_tag[i]);
+			priv->hw_tag[i] = NULL;
+		}
 	}
-	if (priv->hw_def_miss)
-		mlx5dr_action_destroy(priv->hw_def_miss);
-	flow_hw_destroy_nat64_actions(priv);
-	flow_hw_destroy_vlan(dev);
-	if (dr_ctx)
+	mlx5_flow_meter_uninit(dev);
+	mlx5_flow_quota_destroy(dev);
+	flow_hw_cleanup_ctrl_rx_tables(dev);
+	flow_hw_action_template_drop_release(dev);
+	if (dr_ctx) {
 		claim_zero(mlx5dr_context_close(dr_ctx));
+		priv->dr_ctx = NULL;
+	}
+	if (priv->shared_host) {
+		struct mlx5_priv *host_priv = priv->shared_host->data->dev_private;
+
+		__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
+		priv->shared_host = NULL;
+	}
 	for (i = 0; i < nb_q_updated; i++) {
 		rte_ring_free(priv->hw_q[i].indir_iq);
 		rte_ring_free(priv->hw_q[i].indir_cq);
@@ -10637,14 +10655,11 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		mlx5_ipool_destroy(priv->acts_ipool);
 		priv->acts_ipool = NULL;
 	}
-	if (_queue_attr)
-		mlx5_free(_queue_attr);
-	if (priv->shared_host) {
-		__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
-		priv->shared_host = NULL;
-	}
 	mlx5_free(priv->hw_attr);
 	priv->hw_attr = NULL;
+	priv->nb_queue = 0;
+	if (_queue_attr)
+		mlx5_free(_queue_attr);
 	/* Do not overwrite the internal errno information. */
 	if (ret)
 		return ret;
@@ -10677,7 +10692,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 	flow_hw_cleanup_ctrl_fdb_tables(dev);
 	flow_hw_cleanup_tx_repr_tagging(dev);
 	flow_hw_cleanup_ctrl_rx_tables(dev);
-	action_template_drop_release(dev);
+	flow_hw_action_template_drop_release(dev);
 	while (!LIST_EMPTY(&priv->flow_hw_grp)) {
 		grp = LIST_FIRST(&priv->flow_hw_grp);
 		flow_hw_group_unset_miss_group(dev, grp, NULL);
-- 
2.39.2


* [PATCH 4/4] net/mlx5: fix flow configure validation
  2024-03-06 20:21 [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 2/4] net/mlx5: fix templates clean up of FDB control flow rules Dariusz Sosnowski
  2024-03-06 20:21 ` [PATCH 3/4] net/mlx5: fix rollback on failed flow configure Dariusz Sosnowski
@ 2024-03-06 20:21 ` Dariusz Sosnowski
  2024-03-13  7:46 ` [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Raslan Darawsheh
  3 siblings, 0 replies; 5+ messages in thread
From: Dariusz Sosnowski @ 2024-03-06 20:21 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev, stable

The mlx5 PMD has an existing limitation that all configured flow
queues must have the same size. Even though this condition is checked,
some allocations were done before the check. This led to a segmentation
fault during rollback on error in the rte_flow_configure()
implementation.

This patch fixes that by reorganizing the validation, so that all
configuration options are validated before any allocations are done,
and by adding the necessary NULL checks to the error rollback.
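
In essence, the resulting order in flow_hw_configure() is validate
first, allocate second (a simplified sketch built around the helper
added by this patch):

    /* Reject invalid attributes before touching any state, so the
     * error path has nothing extra to unwind.
     */
    if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
        return -rte_errno;
    /* Only now start allocating queues, rings and pools. */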

Bugzilla ID: 1199
Fixes: b401400db24e ("net/mlx5: add port flow configuration")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_hw.c | 62 +++++++++++++++++++++++----------
 1 file changed, 43 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 17ab3a98fe..407a843578 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10253,6 +10253,38 @@ mlx5_hwq_ring_create(uint16_t port_id, uint32_t queue, uint32_t size, const char
 			       RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
 }
 
+static int
+flow_hw_validate_attributes(const struct rte_flow_port_attr *port_attr,
+			    uint16_t nb_queue,
+			    const struct rte_flow_queue_attr *queue_attr[],
+			    struct rte_flow_error *error)
+{
+	uint32_t size;
+	unsigned int i;
+
+	if (port_attr == NULL)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Port attributes must be non-NULL");
+
+	if (nb_queue == 0)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "At least one flow queue is required");
+
+	if (queue_attr == NULL)
+		return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+					  "Queue attributes must be non-NULL");
+
+	size = queue_attr[0]->size;
+	for (i = 1; i < nb_queue; ++i) {
+		if (queue_attr[i]->size != size)
+			return rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+						  NULL,
+						  "All flow queues must have the same size");
+	}
+
+	return 0;
+}
+
 /**
  * Configure port HWS resources.
  *
@@ -10304,10 +10336,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	int ret = 0;
 	uint32_t action_flags;
 
-	if (!port_attr || !nb_queue || !queue_attr) {
-		rte_errno = EINVAL;
-		goto err;
-	}
+	if (flow_hw_validate_attributes(port_attr, nb_queue, queue_attr, error))
+		return -rte_errno;
 	/*
 	 * Calling rte_flow_configure() again is allowed if and only if
 	 * provided configuration matches the initially provided one.
@@ -10354,14 +10384,6 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	/* Allocate the queue job descriptor LIFO. */
 	mem_size = sizeof(priv->hw_q[0]) * nb_q_updated;
 	for (i = 0; i < nb_q_updated; i++) {
-		/*
-		 * Check if the queues' size are all the same as the
-		 * limitation from HWS layer.
-		 */
-		if (_queue_attr[i]->size != _queue_attr[0]->size) {
-			rte_errno = EINVAL;
-			goto err;
-		}
 		mem_size += (sizeof(struct mlx5_hw_q_job *) +
 			     sizeof(struct mlx5_hw_q_job)) * _queue_attr[i]->size;
 	}
@@ -10643,14 +10665,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		__atomic_fetch_sub(&host_priv->shared_refcnt, 1, __ATOMIC_RELAXED);
 		priv->shared_host = NULL;
 	}
-	for (i = 0; i < nb_q_updated; i++) {
-		rte_ring_free(priv->hw_q[i].indir_iq);
-		rte_ring_free(priv->hw_q[i].indir_cq);
-		rte_ring_free(priv->hw_q[i].flow_transfer_pending);
-		rte_ring_free(priv->hw_q[i].flow_transfer_completed);
+	if (priv->hw_q) {
+		for (i = 0; i < nb_q_updated; i++) {
+			rte_ring_free(priv->hw_q[i].indir_iq);
+			rte_ring_free(priv->hw_q[i].indir_cq);
+			rte_ring_free(priv->hw_q[i].flow_transfer_pending);
+			rte_ring_free(priv->hw_q[i].flow_transfer_completed);
+		}
+		mlx5_free(priv->hw_q);
+		priv->hw_q = NULL;
 	}
-	mlx5_free(priv->hw_q);
-	priv->hw_q = NULL;
 	if (priv->acts_ipool) {
 		mlx5_ipool_destroy(priv->acts_ipool);
 		priv->acts_ipool = NULL;
-- 
2.39.2


* RE: [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe
  2024-03-06 20:21 [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe Dariusz Sosnowski
                   ` (2 preceding siblings ...)
  2024-03-06 20:21 ` [PATCH 4/4] net/mlx5: fix flow configure validation Dariusz Sosnowski
@ 2024-03-13  7:46 ` Raslan Darawsheh
  3 siblings, 0 replies; 5+ messages in thread
From: Raslan Darawsheh @ 2024-03-13  7:46 UTC (permalink / raw)
  To: Dariusz Sosnowski, Slava Ovsiienko, Ori Kam, Suanming Mou,
	Matan Azrad, Yevgeny Kliteynik
  Cc: dev, Alex Vesker, stable

Hi,

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Wednesday, March 6, 2024 10:22 PM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Yevgeny Kliteynik <kliteyn@nvidia.com>
> Cc: dev@dpdk.org; Alex Vesker <valex@nvidia.com>; stable@dpdk.org
> Subject: [PATCH 1/4] net/mlx5/hws: fix direct index insert on dep wqe
> 
> From: Alex Vesker <valex@nvidia.com>
> 
> In case a depend WQE was required and direct index was needed we would
> not set the direct index on the dep_wqe.
> This leads to incorrect insertion to index zero.
> 
> Fixes: 38b5bf6452a6 ("net/mlx5/hws: support insert/distribute RTC
> properties")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Alex Vesker <valex@nvidia.com>
> Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Series applied to next-net-mlx,

Kindest regards
Raslan Darawsheh
