DPDK patches and discussions
From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: getelson@nvidia.com, <mkashani@nvidia.com>, <rasland@nvidia.com>,
	"Matan Azrad" <matan@nvidia.com>,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
	"Ori Kam" <orika@nvidia.com>,
	"Suanming Mou" <suanmingm@nvidia.com>
Subject: [PATCH v2 08/16] net/mlx5: support HWS mirror action
Date: Mon, 16 Oct 2023 21:42:27 +0300	[thread overview]
Message-ID: <20231016184235.200427-8-getelson@nvidia.com> (raw)
In-Reply-To: <20231016184235.200427-1-getelson@nvidia.com>

HWS mirror clones the original packet to one or two destinations and
proceeds with the original packet path.

The mirror has no dedicated RTE flow action type. A mirror object is
referenced by the INDIRECT_LIST action. The INDIRECT_LIST action for a
mirror is built from the following actions list:

    SAMPLE [/ SAMPLE] / <Orig. packet destination> / END

The mirror SAMPLE action defines a packet clone. It specifies the clone
destination and an optional clone reformat action.
The destination action for both the clone and the original packet depends
on the HCA domain:
- for NIC RX, destination is either RSS or QUEUE
- for FDB, destination is PORT

HWS mirror was implemented with the INDIRECT_LIST flow action.

MLX5 PMD defines the general `struct mlx5_indirect_list` type for all
INDIRECT_LIST handler objects:

		struct mlx5_indirect_list {
			enum mlx5_indirect_list_type type;
			LIST_ENTRY(mlx5_indirect_list) chain;
			char data[];
		};

A specific INDIRECT_LIST type must overload `mlx5_indirect_list::data`
and provide a unique `type` value.
The PMD returns a pointer to the `mlx5_indirect_list` object.

The existing non-masked actions template API cannot identify flow
actions in an INDIRECT_LIST handler because the handler can represent
several flow actions.

For example:
A: SAMPLE / JUMP
B: SAMPLE / SAMPLE / RSS

Actions template command

	template indirect_list / end mask indirect_list 0 / end

does not provide any information to differentiate between flow
actions in A and B.

MLX5 PMD requires an INDIRECT_LIST configuration parameter in the
template section:

Non-masked INDIRECT_LIST API:
=============================

	template indirect_list X / end mask indirect_list 0 / end

PMD identifies the type of the X handler and will use the same type in
template creation. Actual parameters for actions in the list will be
extracted from the flow configuration.

Masked INDIRECT_LIST API:
=========================

	template indirect_list X / end mask indirect_list -1UL / end

PMD creates action template from actions types and configurations
referenced by X.

An INDIRECT_LIST action without a configuration is invalid and will be
rejected by the PMD.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
 drivers/net/mlx5/mlx5.c         |   1 +
 drivers/net/mlx5/mlx5.h         |   2 +
 drivers/net/mlx5/mlx5_flow.c    | 134 +++++++
 drivers/net/mlx5/mlx5_flow.h    |  69 +++-
 drivers/net/mlx5/mlx5_flow_hw.c | 616 +++++++++++++++++++++++++++++++-
 5 files changed, 817 insertions(+), 5 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 997df595d0..08b7b03365 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2168,6 +2168,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
 	/* Free the eCPRI flex parser resource. */
 	mlx5_flex_parser_ecpri_release(dev);
 	mlx5_flex_item_port_cleanup(dev);
+	mlx5_indirect_list_handles_release(dev);
 #ifdef HAVE_MLX5_HWS_SUPPORT
 	flow_hw_destroy_vport_action(dev);
 	flow_hw_resource_release(dev);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0b709a1bda..f3b872f59c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1791,6 +1791,8 @@ struct mlx5_priv {
 	LIST_HEAD(ind_tables, mlx5_ind_table_obj) ind_tbls;
 	/* Standalone indirect tables. */
 	LIST_HEAD(stdl_ind_tables, mlx5_ind_table_obj) standalone_ind_tbls;
+	/* Objects created with indirect list action */
+	LIST_HEAD(indirect_list, mlx5_indirect_list) indirect_list_head;
 	/* Pointer to next element. */
 	rte_rwlock_t ind_tbls_lock;
 	uint32_t refcnt; /**< Reference counter. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 8ad85e6027..693d1320e1 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -62,6 +62,30 @@ struct tunnel_default_miss_ctx {
 	};
 };
 
+void
+mlx5_indirect_list_handles_release(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	while (!LIST_EMPTY(&priv->indirect_list_head)) {
+		struct mlx5_indirect_list *e =
+			LIST_FIRST(&priv->indirect_list_head);
+
+		LIST_REMOVE(e, entry);
+		switch (e->type) {
+#ifdef HAVE_MLX5_HWS_SUPPORT
+		case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
+			mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)e, true);
+			break;
+#endif
+		default:
+			DRV_LOG(ERR, "invalid indirect list type");
+			MLX5_ASSERT(false);
+			break;
+		}
+	}
+}
+
 static int
 flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 			     struct rte_flow *flow,
@@ -1120,6 +1144,32 @@ mlx5_flow_async_action_handle_query_update
 	 enum rte_flow_query_update_mode qu_mode,
 	 void *user_data, struct rte_flow_error *error);
 
+static struct rte_flow_action_list_handle *
+mlx5_action_list_handle_create(struct rte_eth_dev *dev,
+			       const struct rte_flow_indir_action_conf *conf,
+			       const struct rte_flow_action *actions,
+			       struct rte_flow_error *error);
+
+static int
+mlx5_action_list_handle_destroy(struct rte_eth_dev *dev,
+				struct rte_flow_action_list_handle *handle,
+				struct rte_flow_error *error);
+
+static struct rte_flow_action_list_handle *
+mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue_id,
+					  const struct rte_flow_op_attr *attr,
+					  const struct
+					  rte_flow_indir_action_conf *conf,
+					  const struct rte_flow_action *actions,
+					  void *user_data,
+					  struct rte_flow_error *error);
+static int
+mlx5_flow_async_action_list_handle_destroy
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *op_attr,
+			 struct rte_flow_action_list_handle *action_handle,
+			 void *user_data, struct rte_flow_error *error);
+
 static const struct rte_flow_ops mlx5_flow_ops = {
 	.validate = mlx5_flow_validate,
 	.create = mlx5_flow_create,
@@ -1135,6 +1185,8 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.action_handle_update = mlx5_action_handle_update,
 	.action_handle_query = mlx5_action_handle_query,
 	.action_handle_query_update = mlx5_action_handle_query_update,
+	.action_list_handle_create = mlx5_action_list_handle_create,
+	.action_list_handle_destroy = mlx5_action_list_handle_destroy,
 	.tunnel_decap_set = mlx5_flow_tunnel_decap_set,
 	.tunnel_match = mlx5_flow_tunnel_match,
 	.tunnel_action_decap_release = mlx5_flow_tunnel_action_release,
@@ -1163,6 +1215,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
 	.async_action_handle_query = mlx5_flow_async_action_handle_query,
 	.async_action_handle_destroy = mlx5_flow_async_action_handle_destroy,
 	.async_actions_update = mlx5_flow_async_flow_update,
+	.async_action_list_handle_create =
+		mlx5_flow_async_action_list_handle_create,
+	.async_action_list_handle_destroy =
+		mlx5_flow_async_action_list_handle_destroy,
 };
 
 /* Tunnel information. */
@@ -10869,6 +10925,84 @@ mlx5_action_handle_query_update(struct rte_eth_dev *dev,
 					 query, qu_mode, error);
 }
 
+#define MLX5_DRV_FOPS_OR_ERR(dev, fops, drv_cb, ret)                           \
+{                                                                              \
+	struct rte_flow_attr attr = { .transfer = 0 };                         \
+	enum mlx5_flow_drv_type drv_type = flow_get_drv_type((dev), &attr);    \
+	if (drv_type == MLX5_FLOW_TYPE_MIN ||                                  \
+	    drv_type == MLX5_FLOW_TYPE_MAX) {                                  \
+		rte_flow_error_set(error, ENOTSUP,                             \
+				   RTE_FLOW_ERROR_TYPE_ACTION,                 \
+				   NULL, "invalid driver type");               \
+		return ret;                                                    \
+	}                                                                      \
+	(fops) = flow_get_drv_ops(drv_type);                                   \
+	if (!(fops) || !(fops)->drv_cb) {                                      \
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, \
+				   NULL, "no action_list handler");            \
+		return ret;                                                    \
+	}                                                                      \
+}
+
+static struct rte_flow_action_list_handle *
+mlx5_action_list_handle_create(struct rte_eth_dev *dev,
+			       const struct rte_flow_indir_action_conf *conf,
+			       const struct rte_flow_action *actions,
+			       struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, action_list_handle_create, NULL);
+	return fops->action_list_handle_create(dev, conf, actions, error);
+}
+
+static int
+mlx5_action_list_handle_destroy(struct rte_eth_dev *dev,
+				struct rte_flow_action_list_handle *handle,
+				struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, action_list_handle_destroy, ENOTSUP);
+	return fops->action_list_handle_destroy(dev, handle, error);
+}
+
+static struct rte_flow_action_list_handle *
+mlx5_flow_async_action_list_handle_create(struct rte_eth_dev *dev,
+					  uint32_t queue_id,
+					  const struct
+					  rte_flow_op_attr *op_attr,
+					  const struct
+					  rte_flow_indir_action_conf *conf,
+					  const struct rte_flow_action *actions,
+					  void *user_data,
+					  struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops, async_action_list_handle_create, NULL);
+	return fops->async_action_list_handle_create(dev, queue_id, op_attr,
+						     conf, actions, user_data,
+						     error);
+}
+
+static int
+mlx5_flow_async_action_list_handle_destroy
+	(struct rte_eth_dev *dev, uint32_t queue_id,
+	 const struct rte_flow_op_attr *op_attr,
+	 struct rte_flow_action_list_handle *action_handle,
+	 void *user_data, struct rte_flow_error *error)
+{
+	const struct mlx5_flow_driver_ops *fops;
+
+	MLX5_DRV_FOPS_OR_ERR(dev, fops,
+			     async_action_list_handle_destroy, ENOTSUP);
+	return fops->async_action_list_handle_destroy(dev, queue_id, op_attr,
+						      action_handle, user_data,
+						      error);
+}
+
 /**
  * Destroy all indirect actions (shared RSS).
  *
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 903ff66d72..f6a752475d 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -65,7 +65,7 @@ enum mlx5_rte_flow_field_id {
 	(((uint32_t)(uintptr_t)(handle)) & \
 	 ((1u << MLX5_INDIRECT_ACTION_TYPE_OFFSET) - 1))
 
-enum {
+enum mlx5_indirect_type {
 	MLX5_INDIRECT_ACTION_TYPE_RSS,
 	MLX5_INDIRECT_ACTION_TYPE_AGE,
 	MLX5_INDIRECT_ACTION_TYPE_COUNT,
@@ -97,6 +97,28 @@ enum {
 #define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER
 #define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
 
+enum mlx5_indirect_list_type {
+	MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR = 1,
+};
+
+/*
+ * Base type for indirect list type.
+ * An actual indirect list type MUST override that type and put
+ * type-specific data after the `entry`.
+ */
+struct mlx5_indirect_list {
+	/* type field MUST be the first */
+	enum mlx5_indirect_list_type type;
+	LIST_ENTRY(mlx5_indirect_list) entry;
+	/* put type specific data after entry */
+};
+
+static __rte_always_inline enum mlx5_indirect_list_type
+mlx5_get_indirect_list_type(const struct mlx5_indirect_list *obj)
+{
+	return obj->type;
+}
+
 /* Matches on selected register. */
 struct mlx5_rte_flow_item_tag {
 	enum modify_reg id;
@@ -1218,6 +1240,10 @@ struct rte_flow_hw {
 #pragma GCC diagnostic error "-Wpedantic"
 #endif
 
+struct mlx5dr_action;
+typedef struct mlx5dr_action *
+(*indirect_list_callback_t)(const struct rte_flow_action *);
+
 /* rte flow action translate to DR action struct. */
 struct mlx5_action_construct_data {
 	LIST_ENTRY(mlx5_action_construct_data) next;
@@ -1266,6 +1292,9 @@ struct mlx5_action_construct_data {
 		struct {
 			uint32_t id;
 		} shared_meter;
+		struct {
+			indirect_list_callback_t cb;
+		} indirect_list;
 	};
 };
 
@@ -1776,6 +1805,17 @@ typedef int (*mlx5_flow_action_query_update_t)
 			 const void *update, void *data,
 			 enum rte_flow_query_update_mode qu_mode,
 			 struct rte_flow_error *error);
+typedef struct rte_flow_action_list_handle *
+(*mlx5_flow_action_list_handle_create_t)
+			(struct rte_eth_dev *dev,
+			 const struct rte_flow_indir_action_conf *conf,
+			 const struct rte_flow_action *actions,
+			 struct rte_flow_error *error);
+typedef int
+(*mlx5_flow_action_list_handle_destroy_t)
+			(struct rte_eth_dev *dev,
+			 struct rte_flow_action_list_handle *handle,
+			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_sync_domain_t)
 			(struct rte_eth_dev *dev,
 			 uint32_t domains,
@@ -1964,6 +2004,20 @@ typedef int (*mlx5_flow_async_action_handle_destroy_t)
 			 struct rte_flow_action_handle *handle,
 			 void *user_data,
 			 struct rte_flow_error *error);
+typedef struct rte_flow_action_list_handle *
+(*mlx5_flow_async_action_list_handle_create_t)
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *attr,
+			 const struct rte_flow_indir_action_conf *conf,
+			 const struct rte_flow_action *actions,
+			 void *user_data, struct rte_flow_error *error);
+typedef int
+(*mlx5_flow_async_action_list_handle_destroy_t)
+			(struct rte_eth_dev *dev, uint32_t queue_id,
+			 const struct rte_flow_op_attr *op_attr,
+			 struct rte_flow_action_list_handle *action_handle,
+			 void *user_data, struct rte_flow_error *error);
 
 struct mlx5_flow_driver_ops {
 	mlx5_flow_validate_t validate;
@@ -1999,6 +2053,8 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_action_update_t action_update;
 	mlx5_flow_action_query_t action_query;
 	mlx5_flow_action_query_update_t action_query_update;
+	mlx5_flow_action_list_handle_create_t action_list_handle_create;
+	mlx5_flow_action_list_handle_destroy_t action_list_handle_destroy;
 	mlx5_flow_sync_domain_t sync_domain;
 	mlx5_flow_discover_priorities_t discover_priorities;
 	mlx5_flow_item_create_t item_create;
@@ -2025,6 +2081,10 @@ struct mlx5_flow_driver_ops {
 	mlx5_flow_async_action_handle_query_update_t async_action_query_update;
 	mlx5_flow_async_action_handle_query_t async_action_query;
 	mlx5_flow_async_action_handle_destroy_t async_action_destroy;
+	mlx5_flow_async_action_list_handle_create_t
+		async_action_list_handle_create;
+	mlx5_flow_async_action_list_handle_destroy_t
+		async_action_list_handle_destroy;
 };
 
 /* mlx5_flow.c */
@@ -2755,4 +2815,11 @@ flow_hw_get_srh_flex_parser_byte_off_from_ctx(void *dr_ctx __rte_unused)
 #endif
 	return UINT32_MAX;
 }
+void
+mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
+#ifdef HAVE_MLX5_HWS_SUPPORT
+struct mlx5_mirror;
+void
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release);
+#endif
 #endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index b2215fb5cf..44ed23b1fd 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -58,6 +58,24 @@
 #define MLX5_HW_VLAN_PUSH_VID_IDX 1
 #define MLX5_HW_VLAN_PUSH_PCP_IDX 2
 
+#define MLX5_MIRROR_MAX_CLONES_NUM 3
+#define MLX5_MIRROR_MAX_SAMPLE_ACTIONS_LEN 4
+
+struct mlx5_mirror_clone {
+	enum rte_flow_action_type type;
+	void *action_ctx;
+};
+
+struct mlx5_mirror {
+	/* type field MUST be the first */
+	enum mlx5_indirect_list_type type;
+	LIST_ENTRY(mlx5_indirect_list) entry;
+
+	uint32_t clones_num;
+	struct mlx5dr_action *mirror_action;
+	struct mlx5_mirror_clone clone[MLX5_MIRROR_MAX_CLONES_NUM];
+};
+
 static int flow_hw_flush_all_ctrl_flows(struct rte_eth_dev *dev);
 static int flow_hw_translate_group(struct rte_eth_dev *dev,
 				   const struct mlx5_flow_template_table_cfg *cfg,
@@ -568,6 +586,22 @@ __flow_hw_act_data_general_append(struct mlx5_priv *priv,
 	return 0;
 }
 
+static __rte_always_inline int
+flow_hw_act_data_indirect_list_append(struct mlx5_priv *priv,
+				      struct mlx5_hw_actions *acts,
+				      enum rte_flow_action_type type,
+				      uint16_t action_src, uint16_t action_dst,
+				      indirect_list_callback_t cb)
+{
+	struct mlx5_action_construct_data *act_data;
+
+	act_data = __flow_hw_act_data_alloc(priv, type, action_src, action_dst);
+	if (!act_data)
+		return -1;
+	act_data->indirect_list.cb = cb;
+	LIST_INSERT_HEAD(&acts->act_list, act_data, next);
+	return 0;
+}
 /**
  * Append dynamic encap action to the dynamic action list.
  *
@@ -1383,6 +1417,48 @@ flow_hw_meter_mark_compile(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static struct mlx5dr_action *
+flow_hw_mirror_action(const struct rte_flow_action *action)
+{
+	struct mlx5_mirror *mirror = (void *)(uintptr_t)action->conf;
+
+	return mirror->mirror_action;
+}
+
+static int
+table_template_translate_indirect_list(struct rte_eth_dev *dev,
+				       const struct rte_flow_action *action,
+				       const struct rte_flow_action *mask,
+				       struct mlx5_hw_actions *acts,
+				       uint16_t action_src,
+				       uint16_t action_dst)
+{
+	int ret;
+	bool is_masked = action->conf && mask->conf;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	enum mlx5_indirect_list_type type;
+
+	if (!action->conf)
+		return -EINVAL;
+	type = mlx5_get_indirect_list_type(action->conf);
+	switch (type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
+		if (is_masked) {
+			acts->rule_acts[action_dst].action = flow_hw_mirror_action(action);
+		} else {
+			ret = flow_hw_act_data_indirect_list_append
+				(priv, acts, RTE_FLOW_ACTION_TYPE_INDIRECT_LIST,
+				 action_src, action_dst, flow_hw_mirror_action);
+			if (ret)
+				return ret;
+		}
+		break;
+	default:
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /**
  * Translate rte_flow actions to DR action.
  *
@@ -1419,7 +1495,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	struct rte_flow_action *actions = at->actions;
 	struct rte_flow_action *action_start = actions;
 	struct rte_flow_action *masks = at->masks;
-	enum mlx5dr_action_type refmt_type = 0;
+	enum mlx5dr_action_type refmt_type = MLX5DR_ACTION_TYP_LAST;
 	const struct rte_flow_action_raw_encap *raw_encap_data;
 	const struct rte_flow_item *enc_item = NULL, *enc_item_m = NULL;
 	uint16_t reformat_src = 0;
@@ -1433,7 +1509,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	uint16_t action_pos;
 	uint16_t jump_pos;
 	uint32_t ct_idx;
-	int err;
+	int ret, err;
 	uint32_t target_grp = 0;
 	int table_type;
 
@@ -1445,7 +1521,20 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 	else
 		type = MLX5DR_TABLE_TYPE_NIC_RX;
 	for (; !actions_end; actions++, masks++) {
-		switch (actions->type) {
+		switch ((int)actions->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
+			action_pos = at->actions_off[actions - at->actions];
+			if (!attr->group) {
+				DRV_LOG(ERR, "Indirect action is not supported in root table.");
+				goto err;
+			}
+			ret = table_template_translate_indirect_list
+				(dev, actions, masks, acts,
+				 actions - action_start,
+				 action_pos);
+			if (ret)
+				goto err;
+			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			action_pos = at->actions_off[actions - at->actions];
 			if (!attr->group) {
@@ -2301,7 +2390,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			MLX5_ASSERT(action->type ==
 				    RTE_FLOW_ACTION_TYPE_INDIRECT ||
 				    (int)action->type == act_data->type);
-		switch (act_data->type) {
+		switch ((int)act_data->type) {
+		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
+			rule_acts[act_data->action_dst].action =
+				act_data->indirect_list.cb(action);
+			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			if (flow_hw_shared_action_construct
 					(dev, queue, action, table, it_idx,
@@ -4366,6 +4459,8 @@ mlx5_flow_hw_actions_validate(struct rte_eth_dev *dev,
 		switch (action->type) {
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
+		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
+			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			ret = flow_hw_validate_action_indirect(dev, action,
 							       mask,
@@ -4607,6 +4702,28 @@ flow_hw_dr_actions_template_handle_shared(const struct rte_flow_action *mask,
 	return 0;
 }
 
+static int
+flow_hw_template_actions_list(struct rte_flow_actions_template *at,
+			      unsigned int action_src,
+			      enum mlx5dr_action_type *action_types,
+			      uint16_t *curr_off)
+{
+	enum mlx5_indirect_list_type list_type;
+
+	list_type = mlx5_get_indirect_list_type(at->actions[action_src].conf);
+	switch (list_type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
+		action_template_set_type(at, action_types, action_src, curr_off,
+					 MLX5DR_ACTION_TYP_DEST_ARRAY);
+		break;
+	default:
+		DRV_LOG(ERR, "Unsupported indirect list type");
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /**
  * Create DR action template based on a provided sequence of flow actions.
  *
@@ -4639,6 +4756,12 @@ flow_hw_dr_actions_template_create(struct rte_flow_actions_template *at)
 		switch (at->actions[i].type) {
 		case RTE_FLOW_ACTION_TYPE_VOID:
 			break;
+		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
+			ret = flow_hw_template_actions_list(at, i, action_types,
+							    &curr_off);
+			if (ret)
+				return NULL;
+			break;
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
 			ret = flow_hw_dr_actions_template_handle_shared
 								 (&at->masks[i],
@@ -5119,6 +5242,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
 		 * Need to restore the indirect action index from action conf here.
 		 */
 		case RTE_FLOW_ACTION_TYPE_INDIRECT:
+		case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
 			at->actions[i].conf = actions->conf;
 			at->masks[i].conf = masks->conf;
 			break;
@@ -9354,6 +9478,484 @@ flow_hw_get_aged_flows(struct rte_eth_dev *dev, void **contexts,
 	return flow_hw_get_q_aged_flows(dev, 0, contexts, nb_contexts, error);
 }
 
+static void
+mlx5_mirror_destroy_clone(struct rte_eth_dev *dev,
+			  struct mlx5_mirror_clone *clone)
+{
+	switch (clone->type) {
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+		mlx5_hrxq_release(dev,
+				  ((struct mlx5_hrxq *)(clone->action_ctx))->idx);
+		break;
+	case RTE_FLOW_ACTION_TYPE_JUMP:
+		flow_hw_jump_release(dev, clone->action_ctx);
+		break;
+	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+	case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+	case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+	default:
+		break;
+	}
+}
+
+void
+mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror, bool release)
+{
+	uint32_t i;
+
+	if (mirror->entry.le_prev)
+		LIST_REMOVE(mirror, entry);
+	for (i = 0; i < mirror->clones_num; i++)
+		mlx5_mirror_destroy_clone(dev, &mirror->clone[i]);
+	if (mirror->mirror_action)
+		mlx5dr_action_destroy(mirror->mirror_action);
+	if (release)
+		mlx5_free(mirror);
+}
+
+static inline enum mlx5dr_table_type
+get_mlx5dr_table_type(const struct rte_flow_attr *attr)
+{
+	enum mlx5dr_table_type type;
+
+	if (attr->transfer)
+		type = MLX5DR_TABLE_TYPE_FDB;
+	else if (attr->egress)
+		type = MLX5DR_TABLE_TYPE_NIC_TX;
+	else
+		type = MLX5DR_TABLE_TYPE_NIC_RX;
+	return type;
+}
+
+static __rte_always_inline bool
+mlx5_mirror_terminal_action(const struct rte_flow_action *action)
+{
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_JUMP:
+	case RTE_FLOW_ACTION_TYPE_RSS:
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+		return true;
+	default:
+		break;
+	}
+	return false;
+}
+
+static bool
+mlx5_mirror_validate_sample_action(struct rte_eth_dev *dev,
+				   const struct rte_flow_action *action)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	switch (action->type) {
+	case RTE_FLOW_ACTION_TYPE_QUEUE:
+	case RTE_FLOW_ACTION_TYPE_RSS:
+		if (priv->sh->esw_mode)
+			return false;
+		break;
+	case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+	case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+	case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+	case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+		if (!priv->sh->esw_mode)
+			return false;
+		if (action[0].type == RTE_FLOW_ACTION_TYPE_RAW_DECAP &&
+		    action[1].type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP)
+			return false;
+		break;
+	default:
+		return false;
+	}
+	return true;
+}
+
+/**
+ * A valid mirror actions list contains one or two SAMPLE actions
+ * followed by a terminal action (JUMP, RSS, QUEUE or REPRESENTED_PORT).
+ *
+ * @return
+ *   Number of actions in the list if *actions* was a valid mirror list,
+ *   -EINVAL otherwise.
+ */
+static int
+mlx5_hw_mirror_actions_list_validate(struct rte_eth_dev *dev,
+				     const struct rte_flow_action *actions)
+{
+	if (actions[0].type == RTE_FLOW_ACTION_TYPE_SAMPLE) {
+		int i = 1;
+		bool valid;
+		const struct rte_flow_action_sample *sample = actions[0].conf;
+
+		valid = mlx5_mirror_validate_sample_action(dev, sample->actions);
+		if (!valid)
+			return -EINVAL;
+		if (actions[1].type == RTE_FLOW_ACTION_TYPE_SAMPLE) {
+			i = 2;
+			sample = actions[1].conf;
+			valid = mlx5_mirror_validate_sample_action(dev, sample->actions);
+			if (!valid)
+				return -EINVAL;
+		}
+		return mlx5_mirror_terminal_action(actions + i) ? i + 1 : -EINVAL;
+	}
+	return -EINVAL;
+}
+
+static int
+mirror_format_tir(struct rte_eth_dev *dev,
+		  struct mlx5_mirror_clone *clone,
+		  const struct mlx5_flow_template_table_cfg *table_cfg,
+		  const struct rte_flow_action *action,
+		  struct mlx5dr_action_dest_attr *dest_attr,
+		  struct rte_flow_error *error)
+{
+	uint32_t hws_flags;
+	enum mlx5dr_table_type table_type;
+	struct mlx5_hrxq *tir_ctx;
+
+	table_type = get_mlx5dr_table_type(&table_cfg->attr.flow_attr);
+	hws_flags = mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_NONE_ROOT][table_type];
+	tir_ctx = flow_hw_tir_action_register(dev, hws_flags, action);
+	if (!tir_ctx)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  action, "failed to create QUEUE action for mirror clone");
+	dest_attr->dest = tir_ctx->action;
+	clone->action_ctx = tir_ctx;
+	return 0;
+}
+
+static int
+mirror_format_jump(struct rte_eth_dev *dev,
+		   struct mlx5_mirror_clone *clone,
+		   const struct mlx5_flow_template_table_cfg *table_cfg,
+		   const struct rte_flow_action *action,
+		   struct mlx5dr_action_dest_attr *dest_attr,
+		   struct rte_flow_error *error)
+{
+	const struct rte_flow_action_jump *jump_conf = action->conf;
+	struct mlx5_hw_jump_action *jump = flow_hw_jump_action_register
+						(dev, table_cfg,
+						 jump_conf->group, error);
+
+	if (!jump)
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  action, "failed to create JUMP action for mirror clone");
+	dest_attr->dest = jump->hws_action;
+	clone->action_ctx = jump;
+	return 0;
+}
+
+static int
+mirror_format_port(struct rte_eth_dev *dev,
+		   const struct rte_flow_action *action,
+		   struct mlx5dr_action_dest_attr *dest_attr,
+		   struct rte_flow_error __rte_unused *error)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct rte_flow_action_ethdev *port_action = action->conf;
+
+	dest_attr->dest = priv->hw_vport[port_action->port_id];
+	return 0;
+}
+
+#define MLX5_CONST_ENCAP_ITEM(encap_type, ptr) \
+(((const struct encap_type *)(ptr))->definition)
+
+static int
+hw_mirror_clone_reformat(const struct rte_flow_action *actions,
+			 struct mlx5dr_action_dest_attr *dest_attr,
+			 enum mlx5dr_action_type *action_type, bool decap)
+{
+	int ret;
+	uint8_t encap_buf[MLX5_ENCAP_MAX_LEN];
+	const struct rte_flow_item *encap_item = NULL;
+	const struct rte_flow_action_raw_encap *encap_conf = NULL;
+	typeof(dest_attr->reformat) *reformat = &dest_attr->reformat;
+
+	switch (actions[0].type) {
+	case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+		encap_conf = actions[0].conf;
+		break;
+	case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+		encap_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_vxlan_encap,
+						   actions);
+		break;
+	case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+		encap_item = MLX5_CONST_ENCAP_ITEM(rte_flow_action_nvgre_encap,
+						   actions);
+		break;
+	default:
+		return -EINVAL;
+	}
+	*action_type = decap ?
+		       MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L3 :
+		       MLX5DR_ACTION_TYP_REFORMAT_L2_TO_TNL_L2;
+	if (encap_item) {
+		ret = flow_dv_convert_encap_data(encap_item, encap_buf,
+						 &reformat->reformat_data_sz, NULL);
+		if (ret)
+			return -EINVAL;
+		reformat->reformat_data = (void *)(uintptr_t)encap_buf;
+	} else {
+		reformat->reformat_data = (void *)(uintptr_t)encap_conf->data;
+		reformat->reformat_data_sz = encap_conf->size;
+	}
+	return 0;
+}
+
+static int
+hw_mirror_format_clone(struct rte_eth_dev *dev,
+		       struct mlx5_mirror_clone *clone,
+		       const struct mlx5_flow_template_table_cfg *table_cfg,
+		       const struct rte_flow_action *actions,
+		       struct mlx5dr_action_dest_attr *dest_attr,
+		       struct rte_flow_error *error)
+{
+	int ret;
+	uint32_t i;
+	bool decap_seen = false;
+
+	for (i = 0; actions[i].type != RTE_FLOW_ACTION_TYPE_END; i++) {
+		dest_attr->action_type[i] = mlx5_hw_dr_action_types[actions[i].type];
+		switch (actions[i].type) {
+		case RTE_FLOW_ACTION_TYPE_QUEUE:
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			ret = mirror_format_tir(dev, clone, table_cfg,
+						&actions[i], dest_attr, error);
+			if (ret)
+				return ret;
+			break;
+		case RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT:
+			ret = mirror_format_port(dev, &actions[i],
+						 dest_attr, error);
+			if (ret)
+				return ret;
+			break;
+		case RTE_FLOW_ACTION_TYPE_JUMP:
+			ret = mirror_format_jump(dev, clone, table_cfg,
+						 &actions[i], dest_attr, error);
+			if (ret)
+				return ret;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+			decap_seen = true;
+			break;
+		case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+		case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+		case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+			ret = hw_mirror_clone_reformat(&actions[i], dest_attr,
+						       &dest_attr->action_type[i],
+						       decap_seen);
+			if (ret < 0)
+				return rte_flow_error_set(error, EINVAL,
+							  RTE_FLOW_ERROR_TYPE_ACTION,
+							  &actions[i],
+							  "failed to create reformat action");
+			break;
+		default:
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ACTION,
+						  &actions[i], "unsupported sample action");
+		}
+		clone->type = actions->type;
+	}
+	dest_attr->action_type[i] = MLX5DR_ACTION_TYP_LAST;
+	return 0;
+}
+
+static struct rte_flow_action_list_handle *
+mlx5_hw_mirror_handle_create(struct rte_eth_dev *dev,
+			     const struct mlx5_flow_template_table_cfg *table_cfg,
+			     const struct rte_flow_action *actions,
+			     struct rte_flow_error *error)
+{
+	uint32_t hws_flags;
+	int ret = 0, i, clones_num;
+	struct mlx5_mirror *mirror;
+	enum mlx5dr_table_type table_type;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5dr_action_dest_attr mirror_attr[MLX5_MIRROR_MAX_CLONES_NUM + 1];
+	enum mlx5dr_action_type array_action_types[MLX5_MIRROR_MAX_CLONES_NUM + 1]
+						  [MLX5_MIRROR_MAX_SAMPLE_ACTIONS_LEN + 1];
+
+	memset(mirror_attr, 0, sizeof(mirror_attr));
+	memset(array_action_types, 0, sizeof(array_action_types));
+	table_type = get_mlx5dr_table_type(&table_cfg->attr.flow_attr);
+	hws_flags = mlx5_hw_act_flag[MLX5_HW_ACTION_FLAG_NONE_ROOT][table_type];
+	clones_num = mlx5_hw_mirror_actions_list_validate(dev, actions);
+	if (clones_num < 0) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Invalid mirror list format");
+		return NULL;
+	}
+	mirror = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*mirror),
+			     0, SOCKET_ID_ANY);
+	if (!mirror) {
+		rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Failed to allocate mirror context");
+		return NULL;
+	}
+	mirror->type = MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR;
+	mirror->clones_num = clones_num;
+	for (i = 0; i < clones_num; i++) {
+		const struct rte_flow_action *clone_actions;
+
+		mirror_attr[i].action_type = array_action_types[i];
+		if (actions[i].type == RTE_FLOW_ACTION_TYPE_SAMPLE) {
+			const struct rte_flow_action_sample *sample = actions[i].conf;
+
+			clone_actions = sample->actions;
+		} else {
+			clone_actions = &actions[i];
+		}
+		ret = hw_mirror_format_clone(dev, &mirror->clone[i], table_cfg,
+					     clone_actions, &mirror_attr[i],
+					     error);
+		if (ret)
+			goto error;
+	}
+	hws_flags |= MLX5DR_ACTION_FLAG_SHARED;
+	mirror->mirror_action = mlx5dr_action_create_dest_array(priv->dr_ctx,
+								clones_num,
+								mirror_attr,
+								hws_flags);
+	if (!mirror->mirror_action) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Failed to create HWS mirror action");
+		goto error;
+	}
+
+	LIST_INSERT_HEAD(&priv->indirect_list_head,
+			 (struct mlx5_indirect_list *)mirror, entry);
+	return (struct rte_flow_action_list_handle *)mirror;
+
+error:
+	mlx5_hw_mirror_destroy(dev, mirror, true);
+	return NULL;
+}
+
+static struct rte_flow_action_list_handle *
+flow_hw_async_action_list_handle_create(struct rte_eth_dev *dev, uint32_t queue,
+					const struct rte_flow_op_attr *attr,
+					const struct rte_flow_indir_action_conf *conf,
+					const struct rte_flow_action *actions,
+					void *user_data,
+					struct rte_flow_error *error)
+{
+	struct mlx5_hw_q_job *job = NULL;
+	bool push = flow_hw_action_push(attr);
+	struct rte_flow_action_list_handle *handle;
+	struct mlx5_priv *priv = dev->data->dev_private;
+	const struct mlx5_flow_template_table_cfg table_cfg = {
+		.external = true,
+		.attr = {
+			.flow_attr = {
+				.ingress = conf->ingress,
+				.egress = conf->egress,
+				.transfer = conf->transfer
+			}
+		}
+	};
+
+	if (!actions) {
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   NULL, "No action list");
+		return NULL;
+	}
+	if (attr) {
+		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_CREATE,
+					      error);
+		if (!job)
+			return NULL;
+	}
+	switch (actions[0].type) {
+	case RTE_FLOW_ACTION_TYPE_SAMPLE:
+		handle = mlx5_hw_mirror_handle_create(dev, &table_cfg,
+						      actions, error);
+		break;
+	default:
+		handle = NULL;
+		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+				   actions, "Invalid list");
+	}
+	if (job) {
+		job->action = handle;
+		flow_hw_action_finalize(dev, queue, job, push, false,
+					handle != NULL);
+	}
+	return handle;
+}
+
+static struct rte_flow_action_list_handle *
+flow_hw_action_list_handle_create(struct rte_eth_dev *dev,
+				  const struct rte_flow_indir_action_conf *conf,
+				  const struct rte_flow_action *actions,
+				  struct rte_flow_error *error)
+{
+	return flow_hw_async_action_list_handle_create(dev, MLX5_HW_INV_QUEUE,
+						       NULL, conf, actions,
+						       NULL, error);
+}
+
+static int
+flow_hw_async_action_list_handle_destroy
+			(struct rte_eth_dev *dev, uint32_t queue,
+			 const struct rte_flow_op_attr *attr,
+			 struct rte_flow_action_list_handle *handle,
+			 void *user_data, struct rte_flow_error *error)
+{
+	int ret = 0;
+	struct mlx5_hw_q_job *job = NULL;
+	bool push = flow_hw_action_push(attr);
+	struct mlx5_priv *priv = dev->data->dev_private;
+	enum mlx5_indirect_list_type type =
+		mlx5_get_indirect_list_type((void *)handle);
+
+	if (attr) {
+		job = flow_hw_action_job_init(priv, queue, NULL, user_data,
+					      NULL, MLX5_HW_Q_JOB_TYPE_DESTROY,
+					      error);
+		if (!job)
+			return rte_errno;
+	}
+	switch (type) {
+	case MLX5_INDIRECT_ACTION_LIST_TYPE_MIRROR:
+		mlx5_hw_mirror_destroy(dev, (struct mlx5_mirror *)handle, false);
+		break;
+	default:
+		handle = NULL;
+		ret = rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ACTION,
+					  NULL, "Invalid indirect list handle");
+	}
+	if (job) {
+		job->action = handle;
+		flow_hw_action_finalize(dev, queue, job, push, false,
+				       handle != NULL);
+	}
+	mlx5_free(handle);
+	return ret;
+}
+
+static int
+flow_hw_action_list_handle_destroy(struct rte_eth_dev *dev,
+				   struct rte_flow_action_list_handle *handle,
+				   struct rte_flow_error *error)
+{
+	return flow_hw_async_action_list_handle_destroy(dev, MLX5_HW_INV_QUEUE,
+							NULL, handle, NULL,
+							error);
+}
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.info_get = flow_hw_info_get,
 	.configure = flow_hw_configure,
@@ -9382,6 +9984,12 @@ const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops = {
 	.action_update = flow_hw_action_update,
 	.action_query = flow_hw_action_query,
 	.action_query_update = flow_hw_action_query_update,
+	.action_list_handle_create = flow_hw_action_list_handle_create,
+	.action_list_handle_destroy = flow_hw_action_list_handle_destroy,
+	.async_action_list_handle_create =
+		flow_hw_async_action_list_handle_create,
+	.async_action_list_handle_destroy =
+		flow_hw_async_action_list_handle_destroy,
 	.query = flow_hw_query,
 	.get_aged_flows = flow_hw_get_aged_flows,
 	.get_q_aged_flows = flow_hw_get_q_aged_flows,
-- 
2.39.2

