DPDK patches and discussions
* [PATCH 0/4] net/mlx5: implement Flow update API
@ 2023-06-12 20:05 Alexander Kozyrev
  2023-06-12 20:05 ` [PATCH 1/4] net/mlx5/hws: use the same function to check rule Alexander Kozyrev
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Alexander Kozyrev @ 2023-06-12 20:05 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, viacheslavo, orika, erezsh

Add the implementation for the rte_flow_async_actions_update() API.
Construct the new actions and replace them on the flow handle.
The old action resources are freed during the rte_flow_pull() invocation.
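
For context, a minimal caller-side sketch (illustrative only: port_id,
queue_id, new_actions and the template index are placeholders, and the flow
is assumed to have been created with the async flow API on a template table):

#include <rte_flow.h>

/* Sketch: enqueue an action update on an existing async flow. */
static int
update_flow_actions(uint16_t port_id, uint32_t queue_id,
		    struct rte_flow *flow,
		    const struct rte_flow_action new_actions[],
		    uint8_t actions_template_index, void *user_data)
{
	struct rte_flow_op_attr op_attr = { .postpone = 0 };
	struct rte_flow_error error;

	/* With .postpone = 0 the update is applied right away;
	 * otherwise rte_flow_push() flushes the queued operations. */
	return rte_flow_async_actions_update(port_id, queue_id, &op_attr,
					     flow, new_actions,
					     actions_template_index,
					     user_data, &error);
}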

Alexander Kozyrev (1):
  net/mlx5: implement Flow update API

Erez Shitrit (3):
  net/mlx5/hws: use the same function to check rule
  net/mlx5/hws: use union in the wqe-data struct
  net/mlx5/hws: support rule update after its creation

 drivers/net/mlx5/hws/mlx5dr.h      |  17 +++
 drivers/net/mlx5/hws/mlx5dr_rule.c | 123 +++++++++++++-----
 drivers/net/mlx5/hws/mlx5dr_send.c |   2 +-
 drivers/net/mlx5/mlx5.h            |   1 +
 drivers/net/mlx5/mlx5_flow.c       |  56 +++++++++
 drivers/net/mlx5/mlx5_flow.h       |  13 ++
 drivers/net/mlx5/mlx5_flow_hw.c    | 194 ++++++++++++++++++++++++++---
 7 files changed, 362 insertions(+), 44 deletions(-)

-- 
2.18.2



* [PATCH 1/4] net/mlx5/hws: use the same function to check rule
  2023-06-12 20:05 [PATCH 0/4] net/mlx5: implement Flow update API Alexander Kozyrev
@ 2023-06-12 20:05 ` Alexander Kozyrev
  2023-06-14 16:59   ` Ori Kam
  2023-06-12 20:05 ` [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct Alexander Kozyrev
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Alexander Kozyrev @ 2023-06-12 20:05 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, viacheslavo, orika, erezsh

From: Erez Shitrit <erezsh@nvidia.com>

Use the same precheck function before handling a rule for insert or delete.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_rule.c | 38 +++++++++++++++---------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index 2418ca0b26..e0c4a6a91a 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -630,6 +630,23 @@ static int mlx5dr_rule_destroy_root(struct mlx5dr_rule *rule,
 	return 0;
 }
 
+static int mlx5dr_rule_enqueue_precheck(struct mlx5dr_context *ctx,
+					struct mlx5dr_rule_attr *attr)
+{
+	if (unlikely(!attr->user_data)) {
+		rte_errno = EINVAL;
+		return rte_errno;
+	}
+
+	/* Check if there is room in queue */
+	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
+		rte_errno = EBUSY;
+		return rte_errno;
+	}
+
+	return 0;
+}
+
 int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 		       uint8_t mt_idx,
 		       const struct rte_flow_item items[],
@@ -644,16 +661,8 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 	rule_handle->matcher = matcher;
 	ctx = matcher->tbl->ctx;
 
-	if (unlikely(!attr->user_data)) {
-		rte_errno = EINVAL;
+	if (mlx5dr_rule_enqueue_precheck(ctx, attr))
 		return -rte_errno;
-	}
-
-	/* Check if there is room in queue */
-	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
-		rte_errno = EBUSY;
-		return -rte_errno;
-	}
 
 	assert(matcher->num_of_mt >= mt_idx);
 	assert(matcher->num_of_at >= at_idx);
@@ -677,19 +686,10 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 			struct mlx5dr_rule_attr *attr)
 {
-	struct mlx5dr_context *ctx = rule->matcher->tbl->ctx;
 	int ret;
 
-	if (unlikely(!attr->user_data)) {
-		rte_errno = EINVAL;
-		return -rte_errno;
-	}
-
-	/* Check if there is room in queue */
-	if (unlikely(mlx5dr_send_engine_full(&ctx->send_queue[attr->queue_id]))) {
-		rte_errno = EBUSY;
+	if (mlx5dr_rule_enqueue_precheck(rule->matcher->tbl->ctx, attr))
 		return -rte_errno;
-	}
 
 	if (unlikely(mlx5dr_table_is_root(rule->matcher->tbl)))
 		ret = mlx5dr_rule_destroy_root(rule, attr);
-- 
2.18.2



* [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct
  2023-06-12 20:05 [PATCH 0/4] net/mlx5: implement Flow update API Alexander Kozyrev
  2023-06-12 20:05 ` [PATCH 1/4] net/mlx5/hws: use the same function to check rule Alexander Kozyrev
@ 2023-06-12 20:05 ` Alexander Kozyrev
  2023-06-14 17:00   ` Ori Kam
  2023-06-12 20:05 ` [PATCH 3/4] net/mlx5/hws: support rule update after its creation Alexander Kozyrev
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: Alexander Kozyrev @ 2023-06-12 20:05 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, viacheslavo, orika, erezsh

From: Erez Shitrit <erezsh@nvidia.com>

Be explicit about which union member we are setting.

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index d650c55124..e58fdeb117 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -110,7 +110,7 @@ mlx5dr_send_wqe_set_tag(struct mlx5dr_wqe_gta_data_seg_ste *wqe_data,
 	if (is_jumbo) {
 		/* Clear previous possibly dirty control */
 		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ);
-		memcpy(wqe_data->action, tag->jumbo, MLX5DR_JUMBO_TAG_SZ);
+		memcpy(wqe_data->jumbo, tag->jumbo, MLX5DR_JUMBO_TAG_SZ);
 	} else {
 		/* Clear previous possibly dirty control and actions */
 		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ + MLX5DR_ACTIONS_SZ);
-- 
2.18.2



* [PATCH 3/4] net/mlx5/hws: support rule update after its creation
  2023-06-12 20:05 [PATCH 0/4] net/mlx5: implement Flow update API Alexander Kozyrev
  2023-06-12 20:05 ` [PATCH 1/4] net/mlx5/hws: use the same function to check rule Alexander Kozyrev
  2023-06-12 20:05 ` [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct Alexander Kozyrev
@ 2023-06-12 20:05 ` Alexander Kozyrev
  2023-06-14 17:00   ` Ori Kam
  2023-06-12 20:05 ` [PATCH 4/4] net/mlx5: implement Flow update API Alexander Kozyrev
  2023-06-19 15:03 ` [PATCH 0/4] " Raslan Darawsheh
  4 siblings, 1 reply; 9+ messages in thread
From: Alexander Kozyrev @ 2023-06-12 20:05 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, viacheslavo, orika, erezsh

From: Erez Shitrit <erezsh@nvidia.com>

Add the ability to change a rule's actions after the rule has been created.
The new actions must come from one of the matcher's action templates.
This support is limited to matchers that use the rule-insertion-by-index
optimization (optimize_using_rule_idx).
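
A minimal sketch of the intended usage follows (a sketch only: the helper,
its parameter types and the values are assumptions; rule_handle is an
existing rule on such a matcher and rule_actions[] must come from action
template at_idx of that matcher):

/* Sketch, assuming mlx5dr.h from drivers/net/mlx5/hws is available. */
static int
update_rule_actions(struct mlx5dr_rule *rule_handle, uint8_t at_idx,
		    struct mlx5dr_rule_action rule_actions[],
		    uint16_t queue_id, uint32_t rule_idx, void *user_data)
{
	struct mlx5dr_rule_attr rule_attr = {
		.queue_id = queue_id,   /* queue to post the update on */
		.user_data = user_data, /* returned with the completion */
		.burst = 0,             /* post to HW immediately */
		.rule_idx = rule_idx,   /* index hint for the rule's resources */
	};

	/* Returns zero on successful enqueue; on failure rte_errno is set
	 * (ENOTSUP, EBUSY or EINVAL). */
	return mlx5dr_rule_action_update(rule_handle, at_idx,
					 rule_actions, &rule_attr);
}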

Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Reviewed-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr.h      | 17 ++++++
 drivers/net/mlx5/hws/mlx5dr_rule.c | 85 ++++++++++++++++++++++++++----
 2 files changed, 93 insertions(+), 9 deletions(-)

diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index c14fef7a6b..f881d7c961 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -365,6 +365,23 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 			struct mlx5dr_rule_attr *attr);
 
+/* Enqueue update actions on an existing rule.
+ *
+ * @param[in, out] rule_handle
+ *	A valid rule handle to update.
+ * @param[in] at_idx
+ *	Action template index to update the actions with.
+ * @param[in] rule_actions
+ *	Rule action to be executed on match.
+ * @param[in] attr
+ *	Rule update attributes.
+ * @return zero on successful enqueue, non-zero otherwise.
+ */
+int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
+			      uint8_t at_idx,
+			      struct mlx5dr_rule_action rule_actions[],
+			      struct mlx5dr_rule_attr *attr);
+
 /* Create direct rule drop action.
  *
  * @param[in] ctx
diff --git a/drivers/net/mlx5/hws/mlx5dr_rule.c b/drivers/net/mlx5/hws/mlx5dr_rule.c
index e0c4a6a91a..071e1ad769 100644
--- a/drivers/net/mlx5/hws/mlx5dr_rule.c
+++ b/drivers/net/mlx5/hws/mlx5dr_rule.c
@@ -40,6 +40,17 @@ static void mlx5dr_rule_skip(struct mlx5dr_matcher *matcher,
 	}
 }
 
+static void
+mlx5dr_rule_update_copy_tag(struct mlx5dr_rule *rule,
+			    struct mlx5dr_wqe_gta_data_seg_ste *wqe_data,
+			    bool is_jumbo)
+{
+	if (is_jumbo)
+		memcpy(wqe_data->jumbo, rule->tag.jumbo, MLX5DR_JUMBO_TAG_SZ);
+	else
+		memcpy(wqe_data->tag, rule->tag.match, MLX5DR_MATCH_TAG_SZ);
+}
+
 static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 				     struct mlx5dr_rule *rule,
 				     const struct rte_flow_item *items,
@@ -53,6 +64,14 @@ static void mlx5dr_rule_init_dep_wqe(struct mlx5dr_send_ring_dep_wqe *dep_wqe,
 	dep_wqe->rule = rule;
 	dep_wqe->user_data = user_data;
 
+	if (!items) { /* rule update */
+		dep_wqe->rtc_0 = rule->rtc_0;
+		dep_wqe->rtc_1 = rule->rtc_1;
+		dep_wqe->retry_rtc_1 = 0;
+		dep_wqe->retry_rtc_0 = 0;
+		return;
+	}
+
 	switch (tbl->type) {
 	case MLX5DR_TABLE_TYPE_NIC_RX:
 	case MLX5DR_TABLE_TYPE_NIC_TX:
@@ -213,15 +232,20 @@ void mlx5dr_rule_free_action_ste_idx(struct mlx5dr_rule *rule)
 
 static void mlx5dr_rule_create_init(struct mlx5dr_rule *rule,
 				    struct mlx5dr_send_ste_attr *ste_attr,
-				    struct mlx5dr_actions_apply_data *apply)
+				    struct mlx5dr_actions_apply_data *apply,
+				    bool is_update)
 {
 	struct mlx5dr_matcher *matcher = rule->matcher;
 	struct mlx5dr_table *tbl = matcher->tbl;
 	struct mlx5dr_context *ctx = tbl->ctx;
 
 	/* Init rule before reuse */
-	rule->rtc_0 = 0;
-	rule->rtc_1 = 0;
+	if (!is_update) {
+		/* In update we use these rtc's */
+		rule->rtc_0 = 0;
+		rule->rtc_1 = 0;
+	}
+
 	rule->pending_wqes = 0;
 	rule->action_ste_idx = -1;
 	rule->status = MLX5DR_RULE_STATUS_CREATING;
@@ -264,7 +288,7 @@ static int mlx5dr_rule_create_hws_fw_wqe(struct mlx5dr_rule *rule,
 		return rte_errno;
 	}
 
-	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply, false);
 	mlx5dr_rule_init_dep_wqe(&match_wqe, rule, items, mt, attr->user_data);
 	mlx5dr_rule_init_dep_wqe(&range_wqe, rule, items, mt, attr->user_data);
 
@@ -348,10 +372,13 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 	struct mlx5dr_actions_apply_data apply;
 	struct mlx5dr_send_engine *queue;
 	uint8_t total_stes, action_stes;
+	bool is_update;
 	int i, ret;
 
+	is_update = (items == NULL);
+
 	/* Insert rule using FW WQE if cannot use GTA WQE */
-	if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))
+	if (unlikely(mlx5dr_matcher_req_fw_wqe(matcher) && !is_update))
 		return mlx5dr_rule_create_hws_fw_wqe(rule, attr, mt_idx, items,
 						     at_idx, rule_actions);
 
@@ -361,7 +388,7 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		return rte_errno;
 	}
 
-	mlx5dr_rule_create_init(rule, &ste_attr, &apply);
+	mlx5dr_rule_create_init(rule, &ste_attr, &apply, is_update);
 
 	/* Allocate dependent match WQE since rule might have dependent writes.
 	 * The queued dependent WQE can be later aborted or kept as a dependency.
@@ -408,9 +435,11 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 			 * will always match and perform the specified actions, which
 			 * makes the tag irrelevant.
 			 */
-			if (likely(!mlx5dr_matcher_is_insert_by_idx(matcher)))
+			if (likely(!mlx5dr_matcher_is_insert_by_idx(matcher) && !is_update))
 				mlx5dr_definer_create_tag(items, mt->fc, mt->fc_sz,
 							  (uint8_t *)dep_wqe->wqe_data.action);
+			else if (unlikely(is_update))
+				mlx5dr_rule_update_copy_tag(rule, &dep_wqe->wqe_data, is_jumbo);
 
 			/* Rule has dependent WQEs, match dep_wqe is queued */
 			if (action_stes || apply.require_dep)
@@ -437,8 +466,10 @@ static int mlx5dr_rule_create_hws(struct mlx5dr_rule *rule,
 		mlx5dr_send_ste(queue, &ste_attr);
 	}
 
-	/* Backup TAG on the rule for deletion */
-	mlx5dr_rule_save_delete_info(rule, &ste_attr);
+	/* Backup TAG on the rule for deletion, only after insertion */
+	if (!is_update)
+		mlx5dr_rule_save_delete_info(rule, &ste_attr);
+
 	mlx5dr_send_engine_inc_rule(queue);
 
 	/* Send dependent WQEs */
@@ -666,6 +697,7 @@ int mlx5dr_rule_create(struct mlx5dr_matcher *matcher,
 
 	assert(matcher->num_of_mt >= mt_idx);
 	assert(matcher->num_of_at >= at_idx);
+	assert(items);
 
 	if (unlikely(mlx5dr_table_is_root(matcher->tbl)))
 		ret = mlx5dr_rule_create_root(rule_handle,
@@ -699,6 +731,41 @@ int mlx5dr_rule_destroy(struct mlx5dr_rule *rule,
 	return -ret;
 }
 
+int mlx5dr_rule_action_update(struct mlx5dr_rule *rule_handle,
+			      uint8_t at_idx,
+			      struct mlx5dr_rule_action rule_actions[],
+			      struct mlx5dr_rule_attr *attr)
+{
+	struct mlx5dr_matcher *matcher = rule_handle->matcher;
+	int ret;
+
+	if (unlikely(mlx5dr_table_is_root(matcher->tbl) ||
+	    unlikely(mlx5dr_matcher_req_fw_wqe(matcher)))) {
+		DR_LOG(ERR, "Rule update not supported on current matcher");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+
+	if (!matcher->attr.optimize_using_rule_idx &&
+	    !mlx5dr_matcher_is_insert_by_idx(matcher)) {
+		DR_LOG(ERR, "Rule update requires optimize by idx matcher");
+		rte_errno = ENOTSUP;
+		return -rte_errno;
+	}
+
+	if (mlx5dr_rule_enqueue_precheck(matcher->tbl->ctx, attr))
+		return -rte_errno;
+
+	ret = mlx5dr_rule_create_hws(rule_handle,
+				     attr,
+				     0,
+				     NULL,
+				     at_idx,
+				     rule_actions);
+
+	return -ret;
+}
+
 size_t mlx5dr_rule_get_handle_size(void)
 {
 	return sizeof(struct mlx5dr_rule);
-- 
2.18.2



* [PATCH 4/4] net/mlx5: implement Flow update API
  2023-06-12 20:05 [PATCH 0/4] net/mlx5: implement Flow update API Alexander Kozyrev
                   ` (2 preceding siblings ...)
  2023-06-12 20:05 ` [PATCH 3/4] net/mlx5/hws: support rule update after its creation Alexander Kozyrev
@ 2023-06-12 20:05 ` Alexander Kozyrev
  2023-06-19 15:03 ` [PATCH 0/4] " Raslan Darawsheh
  4 siblings, 0 replies; 9+ messages in thread
From: Alexander Kozyrev @ 2023-06-12 20:05 UTC (permalink / raw)
  To: dev; +Cc: rasland, matan, viacheslavo, orika, erezsh

Add the implementation for the rte_flow_async_actions_update() API.
Construct the new actions and replace them on the flow handle.
The old action resources are freed during the rte_flow_pull() invocation.
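
The replaced resources are returned to their pools only when the completion
is pulled, so the caller keeps draining the queue. A minimal sketch (burst
size, helper name and result handling are illustrative):

#include <rte_flow.h>

/* Sketch: pull completions so the resources of replaced actions are freed. */
static void
drain_flow_completions(uint16_t port_id, uint32_t queue_id)
{
	struct rte_flow_op_result res[32];
	struct rte_flow_error error;
	int n, i;

	n = rte_flow_pull(port_id, queue_id, res, 32, &error);
	for (i = 0; i < n; i++) {
		/* res[i].user_data is the value passed to the update call */
		if (res[i].status != RTE_FLOW_OP_SUCCESS) {
			/* the enqueued operation did not complete successfully */
		}
	}
}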

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
---
 drivers/net/mlx5/mlx5.h         |   1 +
 drivers/net/mlx5/mlx5_flow.c    |  56 +++++++++
 drivers/net/mlx5/mlx5_flow.h    |  13 +++
 drivers/net/mlx5/mlx5_flow_hw.c | 194 +++++++++++++++++++++++++++++---
 4 files changed, 249 insertions(+), 15 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 021049ad2b..2715d6c0be 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -385,6 +385,7 @@ struct mlx5_hw_q_job {
 		struct rte_flow_item_ethdev port_spec;
 		struct rte_flow_item_tag tag_spec;
 	} __rte_packed;
+	struct rte_flow_hw *upd_flow; /* Flow with updated values. */
 };
 
 /* HW steering job descriptor LIFO pool. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index eb1d7a6be2..20d896dbe3 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1048,6 +1048,15 @@ mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
 			    void *user_data,
 			    struct rte_flow_error *error);
 static int
+mlx5_flow_async_flow_update(struct rte_eth_dev *dev,
+			     uint32_t queue,
+			     const struct rte_flow_op_attr *attr,
+			     struct rte_flow *flow,
+			     const struct rte_flow_action actions[],
+			     uint8_t action_template_index,
+			     void *user_data,
+			     struct rte_flow_error *error);
+static int
 mlx5_flow_async_flow_destroy(struct rte_eth_dev *dev,
 			     uint32_t queue,
 			     const struct rte_flow_op_attr *attr,
@@ -1152,6 +1161,7 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 		mlx5_flow_async_action_handle_query_update,
 	.async_action_handle_query = mlx5_flow_async_action_handle_query,
 	.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
+	.async_actions_update = mlx5_flow_async_flow_update,
 };
 
 /* Tunnel information. */
@@ -9349,6 +9359,52 @@ mlx5_flow_async_flow_create_by_index(struct rte_eth_dev *dev,
 				       user_data, error);
 }
 
+/**
+ * Enqueue flow update.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue to destroy the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] flow
+ *   Pointer to the flow to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+mlx5_flow_async_flow_update(struct rte_eth_dev *dev,
+			     uint32_t queue,
+			     const struct rte_flow_op_attr *attr,
+			     struct rte_flow *flow,
+			     const struct rte_flow_action actions[],
+			     uint8_t action_template_index,
+			     void *user_data,
+			     struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+	struct rte_flow_attr fattr = {0};
+
+	if (flow_get_drv_type(dev, &fattr) != MLX5_FLOW_TYPE_HW)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"flow_q update with incorrect steering mode");
+	fops = flow_get_drv_ops(MLX5_FLOW_TYPE_HW);
+	return fops->async_flow_update(dev, queue, attr, flow,
+					actions, action_template_index, user_data, error);
+}
+
 /**
  * Enqueue flow destruction.
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 02e33c7fb3..e3247fb011 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1173,6 +1173,7 @@ typedef uint32_t cnt_id_t;
 /* HWS flow struct. */
 struct rte_flow_hw {
 	uint32_t idx; /* Flow index from indexed pool. */
+	uint32_t res_idx; /* Resource index from indexed pool. */
 	uint32_t fate_type; /* Fate action type. */
 	union {
 		/* Jump action. */
@@ -1180,6 +1181,7 @@ struct rte_flow_hw {
 		struct mlx5_hrxq *hrxq; /* TIR action. */
 	};
 	struct rte_flow_template_table *table; /* The table flow allcated from. */
+	uint8_t mt_idx;
 	uint32_t age_idx;
 	cnt_id_t cnt_id;
 	uint32_t mtr_id;
@@ -1371,6 +1373,7 @@ struct rte_flow_template_table {
 	/* Action templates bind to the table. */
 	struct mlx5_hw_action_template ats[MLX5_HW_TBL_MAX_ACTION_TEMPLATE];
 	struct mlx5_indexed_pool *flow; /* The table's flow ipool. */
+	struct mlx5_indexed_pool *resource; /* The table's resource ipool. */
 	struct mlx5_flow_template_table_cfg cfg;
 	uint32_t type; /* Flow table type RX/TX/FDB. */
 	uint8_t nb_item_templates; /* Item template number. */
@@ -1865,6 +1868,15 @@ typedef struct rte_flow *(*mlx5_flow_async_flow_create_by_index_t)
 			 uint8_t action_template_index,
 			 void *user_data,
 			 struct rte_flow_error *error);
+typedef int (*mlx5_flow_async_flow_update_t)
+			(struct rte_eth_dev *dev,
+			 uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow *flow,
+			 const struct rte_flow_action actions[],
+			 uint8_t action_template_index,
+			 void *user_data,
+			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_async_flow_destroy_t)
 			(struct rte_eth_dev *dev,
 			 uint32_t queue,
@@ -1975,6 +1987,7 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_table_destroy_t template_table_destroy;
 	mlx5_flow_async_flow_create_t async_flow_create;
 	mlx5_flow_async_flow_create_by_index_t async_flow_create_by_index;
+	mlx5_flow_async_flow_update_t async_flow_update;
 	mlx5_flow_async_flow_destroy_t async_flow_destroy;
 	mlx5_flow_pull_t pull;
 	mlx5_flow_push_t push;
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f17a2a0522..949e9dfb95 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -2248,7 +2248,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 
 		if (!hw_acts->mhdr->shared) {
 			rule_acts[pos].modify_header.offset =
-						job->flow->idx - 1;
+						job->flow->res_idx - 1;
 			rule_acts[pos].modify_header.data =
 						(uint8_t *)job->mhdr_cmd;
 			rte_memcpy(job->mhdr_cmd, hw_acts->mhdr->mhdr_cmds,
@@ -2405,7 +2405,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			 */
 			age_idx = mlx5_hws_age_action_create(priv, queue, 0,
 							     age,
-							     job->flow->idx,
+							     job->flow->res_idx,
 							     error);
 			if (age_idx == 0)
 				return -rte_errno;
@@ -2504,7 +2504,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 	}
 	if (hw_acts->encap_decap && !hw_acts->encap_decap->shared) {
 		rule_acts[hw_acts->encap_decap_pos].reformat.offset =
-				job->flow->idx - 1;
+				job->flow->res_idx - 1;
 		rule_acts[hw_acts->encap_decap_pos].reformat.data = buf;
 	}
 	if (mlx5_hws_cnt_id_valid(hw_acts->cnt_id))
@@ -2612,6 +2612,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	struct mlx5_hw_q_job *job;
 	const struct rte_flow_item *rule_items;
 	uint32_t flow_idx;
+	uint32_t res_idx = 0;
 	int ret;
 
 	if (unlikely((!dev->data->dev_started))) {
@@ -2625,12 +2626,17 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
+	mlx5_ipool_malloc(table->resource, &res_idx);
+	if (!res_idx)
+		goto flow_free;
 	/*
 	 * Set the table here in order to know the destination table
 	 * when free the flow afterwards.
 	 */
 	flow->table = table;
+	flow->mt_idx = pattern_template_index;
 	flow->idx = flow_idx;
+	flow->res_idx = res_idx;
 	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
 	/*
 	 * Set the job type here in order to know if the flow memory
@@ -2644,8 +2650,9 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
 	 * insertion hints.
 	 */
-	MLX5_ASSERT(flow_idx > 0);
-	rule_attr.rule_idx = flow_idx - 1;
+	MLX5_ASSERT(res_idx > 0);
+	flow->rule_idx = res_idx - 1;
+	rule_attr.rule_idx = flow->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
 	 * The implicitly appended action is always fixed, like metadata
@@ -2672,8 +2679,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
 		return (struct rte_flow *)flow;
 free:
 	/* Flow created fail, return the descriptor and flow memory. */
-	mlx5_ipool_free(table->flow, flow_idx);
 	priv->hw_q[queue].job_idx++;
+	mlx5_ipool_free(table->resource, res_idx);
+flow_free:
+	mlx5_ipool_free(table->flow, flow_idx);
 error:
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -2729,6 +2738,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	struct rte_flow_hw *flow;
 	struct mlx5_hw_q_job *job;
 	uint32_t flow_idx;
+	uint32_t res_idx = 0;
 	int ret;
 
 	if (unlikely(rule_index >= table->cfg.attr.nb_flows)) {
@@ -2742,12 +2752,17 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
 	if (!flow)
 		goto error;
+	mlx5_ipool_malloc(table->resource, &res_idx);
+	if (!res_idx)
+		goto flow_free;
 	/*
 	 * Set the table here in order to know the destination table
 	 * when free the flow afterwards.
 	 */
 	flow->table = table;
+	flow->mt_idx = 0;
 	flow->idx = flow_idx;
+	flow->res_idx = res_idx;
 	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
 	/*
 	 * Set the job type here in order to know if the flow memory
@@ -2760,9 +2775,8 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	/*
 	 * Set the rule index.
 	 */
-	MLX5_ASSERT(flow_idx > 0);
-	rule_attr.rule_idx = rule_index;
 	flow->rule_idx = rule_index;
+	rule_attr.rule_idx = flow->rule_idx;
 	/*
 	 * Construct the flow actions based on the input actions.
 	 * The implicitly appended action is always fixed, like metadata
@@ -2784,8 +2798,10 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 		return (struct rte_flow *)flow;
 free:
 	/* Flow created fail, return the descriptor and flow memory. */
-	mlx5_ipool_free(table->flow, flow_idx);
 	priv->hw_q[queue].job_idx++;
+	mlx5_ipool_free(table->resource, res_idx);
+flow_free:
+	mlx5_ipool_free(table->flow, flow_idx);
 error:
 	rte_flow_error_set(error, rte_errno,
 			   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
@@ -2793,6 +2809,123 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
 	return NULL;
 }
 
+/**
+ * Enqueue HW steering flow update.
+ *
+ * The flow will be applied to the HW only if the postpone bit is not set or
+ * the extra push function is called.
+ * The flow update status should be checked from the dequeue result.
+ *
+ * @param[in] dev
+ *   Pointer to the rte_eth_dev structure.
+ * @param[in] queue
+ *   The queue used to update the flow.
+ * @param[in] attr
+ *   Pointer to the flow operation attributes.
+ * @param[in] flow
+ *   Pointer to the flow to be updated.
+ * @param[in] actions
+ *   Action with flow spec value.
+ * @param[in] action_template_index
+ *   The action pattern flow follows from the table.
+ * @param[in] user_data
+ *   Pointer to the user_data.
+ * @param[out] error
+ *   Pointer to error structure.
+ *
+ * @return
+ *    0 on success, negative value otherwise and rte_errno is set.
+ */
+static int
+flow_hw_async_flow_update(struct rte_eth_dev *dev,
+			   uint32_t queue,
+			   const struct rte_flow_op_attr *attr,
+			   struct rte_flow *flow,
+			   const struct rte_flow_action actions[],
+			   uint8_t action_template_index,
+			   void *user_data,
+			   struct rte_flow_error *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_rule_attr rule_attr = {
+		.queue_id = queue,
+		.user_data = user_data,
+		.burst = attr->postpone,
+	};
+	struct mlx5dr_rule_action rule_acts[MLX5_HW_MAX_ACTS];
+	struct rte_flow_hw *of = (struct rte_flow_hw *)flow;
+	struct rte_flow_hw *nf;
+	struct rte_flow_template_table *table = of->table;
+	struct mlx5_hw_q_job *job;
+	uint32_t res_idx = 0;
+	int ret;
+
+	if (unlikely(!priv->hw_q[queue].job_idx)) {
+		rte_errno = ENOMEM;
+		goto error;
+	}
+	mlx5_ipool_malloc(table->resource, &res_idx);
+	if (!res_idx)
+		goto error;
+	job = priv->hw_q[queue].job[--priv->hw_q[queue].job_idx];
+	nf = job->upd_flow;
+	memset(nf, 0, sizeof(struct rte_flow_hw));
+	/*
+	 * Set the table here in order to know the destination table
+	 * when free the flow afterwards.
+	 */
+	nf->table = table;
+	nf->mt_idx = of->mt_idx;
+	nf->idx = of->idx;
+	nf->res_idx = res_idx;
+	/*
+	 * Set the job type here in order to know if the flow memory
+	 * should be freed or not when get the result from dequeue.
+	 */
+	job->type = MLX5_HW_Q_JOB_TYPE_UPDATE;
+	job->flow = nf;
+	job->user_data = user_data;
+	rule_attr.user_data = job;
+	/*
+	 * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
+	 * insertion hints.
+	 */
+	MLX5_ASSERT(res_idx > 0);
+	nf->rule_idx = res_idx - 1;
+	rule_attr.rule_idx = nf->rule_idx;
+	/*
+	 * Construct the flow actions based on the input actions.
+	 * The implicitly appended action is always fixed, like metadata
+	 * copy action from FDB to NIC Rx.
+	 * No need to copy and construct a new "actions" list based on the
+	 * user's input, in order to save the cost.
+	 */
+	if (flow_hw_actions_construct(dev, job,
+				      &table->ats[action_template_index],
+				      nf->mt_idx, actions,
+				      rule_acts, queue, error)) {
+		rte_errno = EINVAL;
+		goto free;
+	}
+	/*
+	 * Switch the old flow and the new flow.
+	 */
+	job->flow = of;
+	job->upd_flow = nf;
+	ret = mlx5dr_rule_action_update((struct mlx5dr_rule *)of->rule,
+					action_template_index, rule_acts, &rule_attr);
+	if (likely(!ret))
+		return 0;
+free:
+	/* Flow update failed, return the descriptor and flow memory. */
+	priv->hw_q[queue].job_idx++;
+	mlx5_ipool_free(table->resource, res_idx);
+error:
+	return rte_flow_error_set(error, rte_errno,
+			RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+			"fail to update rte flow");
+}
+
 /**
  * Enqueue HW steering flow destruction.
  *
@@ -3002,6 +3135,7 @@ flow_hw_pull(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_mtr_pool *pool = priv->hws_mpool;
 	struct mlx5_hw_q_job *job;
+	uint32_t res_idx;
 	int ret, i;
 
 	/* 1. Pull the flow completion. */
@@ -3012,9 +3146,12 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				"fail to query flow queue");
 	for (i = 0; i <  ret; i++) {
 		job = (struct mlx5_hw_q_job *)res[i].user_data;
+		/* Release the original resource index in case of update. */
+		res_idx = job->flow->res_idx;
 		/* Restore user data. */
 		res[i].user_data = job->user_data;
-		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+		if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY ||
+		    job->type == MLX5_HW_Q_JOB_TYPE_UPDATE) {
 			if (job->flow->fate_type == MLX5_FLOW_FATE_JUMP)
 				flow_hw_jump_release(dev, job->flow->jump);
 			else if (job->flow->fate_type == MLX5_FLOW_FATE_QUEUE)
@@ -3026,7 +3163,14 @@ flow_hw_pull(struct rte_eth_dev *dev,
 				mlx5_ipool_free(pool->idx_pool,	job->flow->mtr_id);
 				job->flow->mtr_id = 0;
 			}
-			mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+			if (job->type == MLX5_HW_Q_JOB_TYPE_DESTROY) {
+				mlx5_ipool_free(job->flow->table->resource, res_idx);
+				mlx5_ipool_free(job->flow->table->flow, job->flow->idx);
+			} else {
+				rte_memcpy(job->flow, job->upd_flow,
+					offsetof(struct rte_flow_hw, rule));
+				mlx5_ipool_free(job->flow->table->resource, res_idx);
+			}
 		}
 		priv->hw_q[queue].job[priv->hw_q[queue].job_idx++] = job;
 	}
@@ -3315,6 +3459,13 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 	tbl->flow = mlx5_ipool_create(&cfg);
 	if (!tbl->flow)
 		goto error;
+	/* Allocate rule indexed pool. */
+	cfg.size = 0;
+	cfg.type = "mlx5_hw_table_rule";
+	cfg.max_idx += priv->hw_q[0].size;
+	tbl->resource = mlx5_ipool_create(&cfg);
+	if (!tbl->resource)
+		goto error;
 	/* Register the flow group. */
 	ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
 	if (!ge)
@@ -3417,6 +3568,8 @@ flow_hw_table_create(struct rte_eth_dev *dev,
 		if (tbl->grp)
 			mlx5_hlist_unregister(priv->sh->groups,
 					      &tbl->grp->entry);
+		if (tbl->resource)
+			mlx5_ipool_destroy(tbl->resource);
 		if (tbl->flow)
 			mlx5_ipool_destroy(tbl->flow);
 		mlx5_free(tbl);
@@ -3593,16 +3746,20 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int i;
 	uint32_t fidx = 1;
+	uint32_t ridx = 1;
 
 	/* Build ipool allocated object bitmap. */
+	mlx5_ipool_flush_cache(table->resource);
 	mlx5_ipool_flush_cache(table->flow);
 	/* Check if ipool has allocated objects. */
-	if (table->refcnt || mlx5_ipool_get_next(table->flow, &fidx)) {
-		DRV_LOG(WARNING, "Table %p is still in using.", (void *)table);
+	if (table->refcnt ||
+	    mlx5_ipool_get_next(table->flow, &fidx) ||
+	    mlx5_ipool_get_next(table->resource, &ridx)) {
+		DRV_LOG(WARNING, "Table %p is still in use.", (void *)table);
 		return rte_flow_error_set(error, EBUSY,
 				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				   NULL,
-				   "table in using");
+				   "table in use");
 	}
 	LIST_REMOVE(table, next);
 	for (i = 0; i < table->nb_item_templates; i++)
@@ -3615,6 +3772,7 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
 	}
 	mlx5dr_matcher_destroy(table->matcher);
 	mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
+	mlx5_ipool_destroy(table->resource);
 	mlx5_ipool_destroy(table->flow);
 	mlx5_free(table);
 	return 0;
@@ -7416,7 +7574,8 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			    sizeof(struct mlx5_modification_cmd) *
 			    MLX5_MHDR_MAX_CMD +
 			    sizeof(struct rte_flow_item) *
-			    MLX5_HW_MAX_ITEMS) *
+			    MLX5_HW_MAX_ITEMS +
+				sizeof(struct rte_flow_hw)) *
 			    _queue_attr[i]->size;
 	}
 	priv->hw_q = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
@@ -7430,6 +7589,7 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		uint8_t *encap = NULL;
 		struct mlx5_modification_cmd *mhdr_cmd = NULL;
 		struct rte_flow_item *items = NULL;
+		struct rte_flow_hw *upd_flow = NULL;
 
 		priv->hw_q[i].job_idx = _queue_attr[i]->size;
 		priv->hw_q[i].size = _queue_attr[i]->size;
@@ -7448,10 +7608,13 @@ flow_hw_configure(struct rte_eth_dev *dev,
 			 &mhdr_cmd[_queue_attr[i]->size * MLX5_MHDR_MAX_CMD];
 		items = (struct rte_flow_item *)
 			 &encap[_queue_attr[i]->size * MLX5_ENCAP_MAX_LEN];
+		upd_flow = (struct rte_flow_hw *)
+			&items[_queue_attr[i]->size * MLX5_HW_MAX_ITEMS];
 		for (j = 0; j < _queue_attr[i]->size; j++) {
 			job[j].mhdr_cmd = &mhdr_cmd[j * MLX5_MHDR_MAX_CMD];
 			job[j].encap_data = &encap[j * MLX5_ENCAP_MAX_LEN];
 			job[j].items = &items[j * MLX5_HW_MAX_ITEMS];
+			job[j].upd_flow = &upd_flow[j];
 			priv->hw_q[i].job[j] = &job[j];
 		}
 		snprintf(mz_name, sizeof(mz_name), "port_%u_indir_act_cq_%u",
@@ -9031,6 +9194,7 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.template_table_destroy = flow_hw_table_destroy,
 	.async_flow_create = flow_hw_async_flow_create,
 	.async_flow_create_by_index = flow_hw_async_flow_create_by_index,
+	.async_flow_update = flow_hw_async_flow_update,
 	.async_flow_destroy = flow_hw_async_flow_destroy,
 	.pull = flow_hw_pull,
 	.push = flow_hw_push,
-- 
2.18.2



* RE: [PATCH 1/4] net/mlx5/hws: use the same function to check rule
  2023-06-12 20:05 ` [PATCH 1/4] net/mlx5/hws: use the same function to check rule Alexander Kozyrev
@ 2023-06-14 16:59   ` Ori Kam
  0 siblings, 0 replies; 9+ messages in thread
From: Ori Kam @ 2023-06-14 16:59 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Erez Shitrit



> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Monday, June 12, 2023 11:06 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam
> <orika@nvidia.com>; Erez Shitrit <erezsh@nvidia.com>
> Subject: [PATCH 1/4] net/mlx5/hws: use the same function to check rule
> 
> From: Erez Shitrit <erezsh@nvidia.com>
> 
> Before handling it for insert/delete
> 
> Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
> Reviewed-by: Alex Vesker <valex@nvidia.com>
> ---
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori


* RE: [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct
  2023-06-12 20:05 ` [PATCH 2/4] net/mlx5/hws: use union in the wqe-data struct Alexander Kozyrev
@ 2023-06-14 17:00   ` Ori Kam
  0 siblings, 0 replies; 9+ messages in thread
From: Ori Kam @ 2023-06-14 17:00 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Erez Shitrit



> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Monday, June 12, 2023 11:06 PM
> 
> From: Erez Shitrit <erezsh@nvidia.com>
> 
> To be clear about which field we are going to set.
> 
> Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
> Reviewed-by: Alex Vesker <valex@nvidia.com>
> ---
>  drivers/net/mlx5/hws/mlx5dr_send.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c
> b/drivers/net/mlx5/hws/mlx5dr_send.c
> index d650c55124..e58fdeb117 100644
> --- a/drivers/net/mlx5/hws/mlx5dr_send.c
> +++ b/drivers/net/mlx5/hws/mlx5dr_send.c
> @@ -110,7 +110,7 @@ mlx5dr_send_wqe_set_tag(struct
> mlx5dr_wqe_gta_data_seg_ste *wqe_data,
>  	if (is_jumbo) {
>  		/* Clear previous possibly dirty control */
>  		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ);
> -		memcpy(wqe_data->action, tag->jumbo,
> MLX5DR_JUMBO_TAG_SZ);
> +		memcpy(wqe_data->jumbo, tag->jumbo,
> MLX5DR_JUMBO_TAG_SZ);
>  	} else {
>  		/* Clear previous possibly dirty control and actions */
>  		memset(wqe_data, 0, MLX5DR_STE_CTRL_SZ +
> MLX5DR_ACTIONS_SZ);
> --
> 2.18.2

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori



* RE: [PATCH 3/4] net/mlx5/hws: support rule update after its creation
  2023-06-12 20:05 ` [PATCH 3/4] net/mlx5/hws: support rule update after its creation Alexander Kozyrev
@ 2023-06-14 17:00   ` Ori Kam
  0 siblings, 0 replies; 9+ messages in thread
From: Ori Kam @ 2023-06-14 17:00 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Raslan Darawsheh, Matan Azrad, Slava Ovsiienko, Erez Shitrit



> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Monday, June 12, 2023 11:06 PM
> 
> From: Erez Shitrit <erezsh@nvidia.com>
> 
> Add the ability to change rule's actions after the rule already created.
> The new actions should be one of the action template list.
> That support is only for matcher that uses the optimization of
> using rule insertion by index (optimize_using_rule_idx)
> 
> Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
> Reviewed-by: Alex Vesker <valex@nvidia.com>
> ---

Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori


* RE: [PATCH 0/4] net/mlx5: implement Flow update API
  2023-06-12 20:05 [PATCH 0/4] net/mlx5: implement Flow update API Alexander Kozyrev
                   ` (3 preceding siblings ...)
  2023-06-12 20:05 ` [PATCH 4/4] net/mlx5: implement Flow update API Alexander Kozyrev
@ 2023-06-19 15:03 ` Raslan Darawsheh
  4 siblings, 0 replies; 9+ messages in thread
From: Raslan Darawsheh @ 2023-06-19 15:03 UTC (permalink / raw)
  To: Alexander Kozyrev, dev
  Cc: Matan Azrad, Slava Ovsiienko, Ori Kam, Erez Shitrit

Hi,

> -----Original Message-----
> From: Alexander Kozyrev <akozyrev@nvidia.com>
> Sent: Monday, June 12, 2023 11:06 PM
> To: dev@dpdk.org
> Cc: Raslan Darawsheh <rasland@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam
> <orika@nvidia.com>; Erez Shitrit <erezsh@nvidia.com>
> Subject: [PATCH 0/4] net/mlx5: implement Flow update API
> 
> Add the implementation for the rte_flow_async_actions_update() API.
> Construct the new actions and replace them for the Flow handle.
> Old resources are freed during the rte_flow_pull() invocation.
> 
> Alexander Kozyrev (1):
>   net/mlx5: implement Flow update API
> 
> Erez Shitrit (3):
>   net/mlx5/hws: use the same function to check rule
>   net/mlx5/hws: use union in the wqe-data struct
>   net/mlx5/hws: support rule update after its creation
> 
>  drivers/net/mlx5/hws/mlx5dr.h      |  17 +++
>  drivers/net/mlx5/hws/mlx5dr_rule.c | 123 +++++++++++++-----
>  drivers/net/mlx5/hws/mlx5dr_send.c |   2 +-
>  drivers/net/mlx5/mlx5.h            |   1 +
>  drivers/net/mlx5/mlx5_flow.c       |  56 +++++++++
>  drivers/net/mlx5/mlx5_flow.h       |  13 ++
>  drivers/net/mlx5/mlx5_flow_hw.c    | 194 ++++++++++++++++++++++++++-
> --
>  7 files changed, 362 insertions(+), 44 deletions(-)
> 
> --
> 2.18.2

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh

