From: Suanming Mou <suanmingm@nvidia.com>
To: Matan Azrad <matan@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Cc: <dev@dpdk.org>, <rasland@nvidia.com>, <orika@nvidia.com>,
	"Dariusz Sosnowski" <dsosnowski@nvidia.com>
Subject: [PATCH v2 17/17] net/mlx5: support device control of representor matching
Date: Wed, 28 Sep 2022 06:31:30 +0300
Message-ID: <20220928033130.9106-18-suanmingm@nvidia.com>
In-Reply-To: <20220928033130.9106-1-suanmingm@nvidia.com>

From: Dariusz Sosnowski <dsosnowski@nvidia.com>

In some E-Switch use cases, applications want to receive all traffic
on a single port. Since the flow API currently does not provide a way
to match traffic forwarded to any port representor, this patch adds
support for controlling representor matching on ingress flow rules.

Representor matching is controlled through a new device argument
repr_matching_en.
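
For example, HW Steering with representor matching disabled could be
requested with devargs like the following (the PCI address and
representor list are illustrative only):

    dpdk-testpmd -a 08:00.0,representor=vf[0-1],dv_flow_en=2,repr_matching_en=0 -- -i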

- If representor matching is enabled (default setting),
  then each ingress pattern template has an implicit REPRESENTED_PORT
  item added (illustrated below). Flow rules based on this pattern
  template will match the vport associated with the port on which the
  rule is created.
- If representor matching is disabled, then there will be no implicit
  item added. As a result, ingress flow rules will match traffic
  coming to any port, not only the port on which the flow rule is
  created.

Representor matching is enabled by default to provide the expected
default behavior.
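
As an illustration, a minimal sketch of an ingress pattern template
created through the template API (port_id and error handling here are
assumptions for the example, not part of this patch):

    /* With repr_matching_en=1 (default), the PMD implicitly prepends
     * a REPRESENTED_PORT item, so rules created from this template
     * match only traffic of the vport associated with port_id.
     */
    struct rte_flow_pattern_template_attr pt_attr = {
            .ingress = 1,
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_error err;
    struct rte_flow_pattern_template *pt =
            rte_flow_pattern_template_create(port_id, &pt_attr, pattern, &err);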

This patch enables egress flow rules on representors when E-Switch is
enabled in the following configurations:

- repr_matching_en=1 and dv_xmeta_en=4
- repr_matching_en=1 and dv_xmeta_en=0
- repr_matching_en=0 and dv_xmeta_en=0

When representor matching is enabled, the following logic is
implemented:

1. An egress template table is created in group 0 for each port. These
   tables hold default flow rules defined as follows:

      pattern SQ
      actions MODIFY_FIELD (set available bits in REG_C_0 to
                            vport_meta_tag)
              MODIFY_FIELD (copy REG_A to REG_C_1, only when
                            dv_xmeta_en == 4)
              JUMP (group 1)

2. Egress pattern templates created by an application have an implicit
   MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which
   matches the available bits of REG_C_0.

3. Egress flow rules created by an application have an implicit
   MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which
   matches the vport_meta_tag placed in the available bits of REG_C_0.

4. Egress template tables created by an application, which are in
   group n, are placed in group n + 1 (see the sketch after this
   list).

5. Items and actions related to META operate on REG_A when
   dv_xmeta_en == 0, or on REG_C_1 when dv_xmeta_en == 4.
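
As a sketch of point 4 above (attribute values are illustrative), an
egress template table requested in group 1 is internally placed in
group 2, keeping egress group 0 reserved for the default tagging
rules:

    struct rte_flow_template_table_attr tbl_attr = {
            .flow_attr = {
                    .group = 1,    /* Translated to group 2 by the PMD. */
                    .egress = 1,
            },
            .nb_flows = 1024,
    };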

When representor matching is disabled and extended metadata is
disabled, no changes to the current logic are required.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  11 +
 drivers/net/mlx5/mlx5.c          |  13 +
 drivers/net/mlx5/mlx5.h          |   5 +
 drivers/net/mlx5/mlx5_flow.c     |   8 +-
 drivers/net/mlx5/mlx5_flow.h     |   7 +
 drivers/net/mlx5/mlx5_flow_hw.c  | 718 ++++++++++++++++++++++++-------
 drivers/net/mlx5/mlx5_trigger.c  | 167 ++++++-
 7 files changed, 771 insertions(+), 158 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index de8c003d02..50d34b152a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1555,6 +1555,9 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->sh->config.dv_flow_en == 2) {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 		if (priv->sh->config.dv_esw_en) {
+			uint32_t usable_bits;
+			uint32_t required_bits;
+
 			if (priv->sh->dv_regc0_mask == UINT32_MAX) {
 				DRV_LOG(ERR, "E-Switch port metadata is required when using HWS "
 					     "but it is disabled (configure it through devlink)");
@@ -1567,6 +1570,14 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 				err = ENOTSUP;
 				goto error;
 			}
+			usable_bits = __builtin_popcount(priv->sh->dv_regc0_mask);
+			required_bits = __builtin_popcount(priv->vport_meta_mask);
+			if (usable_bits < required_bits) {
+				DRV_LOG(ERR, "Not enough bits available in reg_c[0] to provide "
+					     "representor matching.");
+				err = ENOTSUP;
+				goto error;
+			}
 		}
 		if (priv->vport_meta_mask)
 			flow_hw_set_port_info(eth_dev);
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 742607509b..c249619a60 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -181,6 +181,9 @@
 /* HW steering counter's query interval. */
 #define MLX5_HWS_CNT_CYCLE_TIME "svc_cycle_time"
 
+/* Device parameter to control representor matching in ingress/egress flows with HWS. */
+#define MLX5_REPR_MATCHING_EN "repr_matching_en"
+
 /* Shared memory between primary and secondary processes. */
 struct mlx5_shared_data *mlx5_shared_data;
 
@@ -1283,6 +1286,8 @@ mlx5_dev_args_check_handler(const char *key, const char *val, void *opaque)
 		config->cnt_svc.service_core = tmp;
 	} else if (strcmp(MLX5_HWS_CNT_CYCLE_TIME, key) == 0) {
 		config->cnt_svc.cycle_time = tmp;
+	} else if (strcmp(MLX5_REPR_MATCHING_EN, key) == 0) {
+		config->repr_matching = !!tmp;
 	}
 	return 0;
 }
@@ -1321,6 +1326,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 		MLX5_FDB_DEFAULT_RULE_EN,
 		MLX5_HWS_CNT_SERVICE_CORE,
 		MLX5_HWS_CNT_CYCLE_TIME,
+		MLX5_REPR_MATCHING_EN,
 		NULL,
 	};
 	int ret = 0;
@@ -1335,6 +1341,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 	config->fdb_def_rule = 1;
 	config->cnt_svc.cycle_time = MLX5_CNT_SVC_CYCLE_TIME_DEFAULT;
 	config->cnt_svc.service_core = rte_get_main_lcore();
+	config->repr_matching = 1;
 	if (mkvlist != NULL) {
 		/* Process parameters. */
 		ret = mlx5_kvargs_process(mkvlist, params,
@@ -1368,6 +1375,11 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 			config->dv_xmeta_en);
 		config->dv_xmeta_en = MLX5_XMETA_MODE_LEGACY;
 	}
+	if (config->dv_flow_en != 2 && !config->repr_matching) {
+		DRV_LOG(DEBUG, "Disabling representor matching is valid only "
+			       "when HW Steering is enabled.");
+		config->repr_matching = 1;
+	}
 	if (config->tx_pp && !sh->dev_cap.txpp_en) {
 		DRV_LOG(ERR, "Packet pacing is not supported.");
 		rte_errno = ENODEV;
@@ -1411,6 +1423,7 @@ mlx5_shared_dev_ctx_args_config(struct mlx5_dev_ctx_shared *sh,
 	DRV_LOG(DEBUG, "\"allow_duplicate_pattern\" is %u.",
 		config->allow_duplicate_pattern);
 	DRV_LOG(DEBUG, "\"fdb_def_rule_en\" is %u.", config->fdb_def_rule);
+	DRV_LOG(DEBUG, "\"repr_matching_en\" is %u.", config->repr_matching);
 	return 0;
 }
 
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9299ffe9f9..25719401c8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -316,6 +316,7 @@ struct mlx5_sh_config {
 	} cnt_svc; /* configure for HW steering's counter's service. */
 	/* Allow/Prevent the duplicate rules pattern. */
 	uint32_t fdb_def_rule:1; /* Create FDB default jump rule */
+	uint32_t repr_matching:1; /* Enable implicit vport matching in HWS FDB. */
 };
 
 /* Structure for VF VLAN workaround. */
@@ -366,6 +367,7 @@ struct mlx5_hw_q_job {
 			void *out_data;
 		} __rte_packed;
 		struct rte_flow_item_ethdev port_spec;
+		struct rte_flow_item_tag tag_spec;
 	} __rte_packed;
 };
 
@@ -1673,6 +1675,9 @@ struct mlx5_priv {
 	struct rte_flow_template_table *hw_esw_sq_miss_tbl;
 	struct rte_flow_template_table *hw_esw_zero_tbl;
 	struct rte_flow_template_table *hw_tx_meta_cpy_tbl;
+	struct rte_flow_pattern_template *hw_tx_repr_tagging_pt;
+	struct rte_flow_actions_template *hw_tx_repr_tagging_at;
+	struct rte_flow_template_table *hw_tx_repr_tagging_tbl;
 	struct mlx5_indexed_pool *flows[MLX5_FLOW_TYPE_MAXI];
 	/* RTE Flow rules. */
 	uint32_t ctrl_flows; /* Control flow rules. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 2142cd828a..026d4eb9c0 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1123,7 +1123,11 @@ mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
 		}
 		break;
 	case MLX5_METADATA_TX:
-		return REG_A;
+		if (config->dv_flow_en == 2 && config->dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
+			return REG_C_1;
+		} else {
+			return REG_A;
+		}
 	case MLX5_METADATA_FDB:
 		switch (config->dv_xmeta_en) {
 		case MLX5_XMETA_MODE_LEGACY:
@@ -11319,7 +11323,7 @@ mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev,
 			return 0;
 		}
 	}
-	return rte_flow_error_set(error, EINVAL,
+	return rte_flow_error_set(error, ENODEV,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, "unable to find a proxy port");
 }
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 63f946473d..a497dac474 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1199,12 +1199,18 @@ struct rte_flow_pattern_template {
 	struct rte_flow_pattern_template_attr attr;
 	struct mlx5dr_match_template *mt; /* mlx5 match template. */
 	uint64_t item_flags; /* Item layer flags. */
+	uint64_t orig_item_nb; /* Number of pattern items provided by the user (with END item). */
 	uint32_t refcnt;  /* Reference counter. */
 	/*
 	 * If true, then rule pattern should be prepended with
 	 * represented_port pattern item.
 	 */
 	bool implicit_port;
+	/*
+	 * If true, then rule pattern should be prepended with
+	 * tag pattern item for representor matching.
+	 */
+	bool implicit_tag;
 };
 
 /* Flow action template struct. */
@@ -2479,6 +2485,7 @@ int mlx5_flow_hw_esw_create_sq_miss_flow(struct rte_eth_dev *dev,
 					 uint32_t sqn);
 int mlx5_flow_hw_esw_create_default_jump_flow(struct rte_eth_dev *dev);
 int mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev);
+int mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn);
 int mlx5_flow_actions_validate(struct rte_eth_dev *dev,
 		const struct rte_flow_actions_template_attr *attr,
 		const struct rte_flow_action actions[],
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 6a7f1376f7..242bbaa4d4 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -32,12 +32,15 @@
 /* Maximum number of rules in control flow tables. */
 #define MLX5_HW_CTRL_FLOW_NB_RULES (4096)
 
-/* Lowest flow group usable by an application. */
+/* Lowest flow group usable by an application if group translation is done. */
 #define MLX5_HW_LOWEST_USABLE_GROUP (1)
 
 /* Maximum group index usable by user applications for transfer flows. */
 #define MLX5_HW_MAX_TRANSFER_GROUP (UINT32_MAX - 1)
 
+/* Maximum group index usable by user applications for egress flows. */
+#define MLX5_HW_MAX_EGRESS_GROUP (UINT32_MAX - 1)
+
 /* Lowest priority for HW root table. */
 #define MLX5_HW_LOWEST_PRIO_ROOT 15
 
@@ -61,6 +64,9 @@ flow_hw_set_vlan_vid_construct(struct rte_eth_dev *dev,
 			       const struct mlx5_hw_actions *hw_acts,
 			       const struct rte_flow_action *action);
 
+static __rte_always_inline uint32_t flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev);
+static __rte_always_inline uint32_t flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev);
+
 const struct mlx5_flow_driver_ops mlx5_flow_hw_drv_ops;
 
 /* DR action flags with different table. */
@@ -2329,21 +2335,18 @@ flow_hw_get_rule_items(struct rte_eth_dev *dev,
 		       uint8_t pattern_template_index,
 		       struct mlx5_hw_q_job *job)
 {
-	if (table->its[pattern_template_index]->implicit_port) {
-		const struct rte_flow_item *curr_item;
-		unsigned int nb_items;
-		bool found_end;
-		unsigned int i;
-
-		/* Count number of pattern items. */
-		nb_items = 0;
-		found_end = false;
-		for (curr_item = items; !found_end; ++curr_item) {
-			++nb_items;
-			if (curr_item->type == RTE_FLOW_ITEM_TYPE_END)
-				found_end = true;
+	struct rte_flow_pattern_template *pt = table->its[pattern_template_index];
+
+	/* Only one implicit item can be added to flow rule pattern. */
+	MLX5_ASSERT(!pt->implicit_port || !pt->implicit_tag);
+	/* At least one item was allocated in job descriptor for items. */
+	MLX5_ASSERT(MLX5_HW_MAX_ITEMS >= 1);
+	if (pt->implicit_port) {
+		if (pt->orig_item_nb + 1 > MLX5_HW_MAX_ITEMS) {
+			rte_errno = ENOMEM;
+			return NULL;
 		}
-		/* Prepend represented port item. */
+		/* Set up represented port item in job descriptor. */
 		job->port_spec = (struct rte_flow_item_ethdev){
 			.port_id = dev->data->port_id,
 		};
@@ -2351,21 +2354,26 @@ flow_hw_get_rule_items(struct rte_eth_dev *dev,
 			.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
 			.spec = &job->port_spec,
 		};
-		found_end = false;
-		for (i = 1; i < MLX5_HW_MAX_ITEMS && i - 1 < nb_items; ++i) {
-			job->items[i] = items[i - 1];
-			if (items[i - 1].type == RTE_FLOW_ITEM_TYPE_END) {
-				found_end = true;
-				break;
-			}
-		}
-		if (i >= MLX5_HW_MAX_ITEMS && !found_end) {
+		rte_memcpy(&job->items[1], items, sizeof(*items) * pt->orig_item_nb);
+		return job->items;
+	} else if (pt->implicit_tag) {
+		if (pt->orig_item_nb + 1 > MLX5_HW_MAX_ITEMS) {
 			rte_errno = ENOMEM;
 			return NULL;
 		}
+		/* Set up tag item in job descriptor. */
+		job->tag_spec = (struct rte_flow_item_tag){
+			.data = flow_hw_tx_tag_regc_value(dev),
+		};
+		job->items[0] = (struct rte_flow_item){
+			.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+			.spec = &job->tag_spec,
+		};
+		rte_memcpy(&job->items[1], items, sizeof(*items) * pt->orig_item_nb);
 		return job->items;
+	} else {
+		return items;
 	}
-	return items;
 }
 
 /**
@@ -3152,9 +3160,10 @@ flow_hw_translate_group(struct rte_eth_dev *dev,
 			struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_sh_config *config = &priv->sh->config;
 	const struct rte_flow_attr *flow_attr = &cfg->attr.flow_attr;
 
-	if (priv->sh->config.dv_esw_en &&
+	if (config->dv_esw_en &&
 	    priv->fdb_def_rule &&
 	    cfg->external &&
 	    flow_attr->transfer) {
@@ -3164,6 +3173,22 @@ flow_hw_translate_group(struct rte_eth_dev *dev,
 						  NULL,
 						  "group index not supported");
 		*table_group = group + 1;
+	} else if (config->dv_esw_en &&
+		   !(config->repr_matching && config->dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS) &&
+		   cfg->external &&
+		   flow_attr->egress) {
+		/*
+		 * On E-Switch setups, egress group translation is not done if and only if
+		 * representor matching is disabled and legacy metadata mode is selected.
+		 * In all other cases, egress group 0 is reserved for representor tagging flows
+		 * and metadata copy flows.
+		 */
+		if (group > MLX5_HW_MAX_EGRESS_GROUP)
+			return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ATTR_GROUP,
+						  NULL,
+						  "group index not supported");
+		*table_group = group + 1;
 	} else {
 		*table_group = group;
 	}
@@ -3204,7 +3229,6 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 			      uint8_t nb_action_templates,
 			      struct rte_flow_error *error)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_template_table_cfg cfg = {
 		.attr = *attr,
 		.external = true,
@@ -3213,12 +3237,6 @@ flow_hw_template_table_create(struct rte_eth_dev *dev,
 
 	if (flow_hw_translate_group(dev, &cfg, group, &cfg.attr.flow_attr.group, error))
 		return NULL;
-	if (priv->sh->config.dv_esw_en && cfg.attr.flow_attr.egress) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, NULL,
-				  "egress flows are not supported with HW Steering"
-				  " when E-Switch is enabled");
-		return NULL;
-	}
 	return flow_hw_table_create(dev, &cfg, item_templates, nb_item_templates,
 				    action_templates, nb_action_templates, error);
 }
@@ -4456,26 +4474,28 @@ flow_hw_actions_template_destroy(struct rte_eth_dev *dev __rte_unused,
 	return 0;
 }
 
-static struct rte_flow_item *
-flow_hw_copy_prepend_port_item(const struct rte_flow_item *items,
-			       struct rte_flow_error *error)
+static uint32_t
+flow_hw_count_items(const struct rte_flow_item *items)
 {
 	const struct rte_flow_item *curr_item;
-	struct rte_flow_item *copied_items;
-	bool found_end;
-	unsigned int nb_items;
-	unsigned int i;
-	size_t size;
+	uint32_t nb_items;
 
-	/* Count number of pattern items. */
 	nb_items = 0;
-	found_end = false;
-	for (curr_item = items; !found_end; ++curr_item) {
+	for (curr_item = items; curr_item->type != RTE_FLOW_ITEM_TYPE_END; ++curr_item)
 		++nb_items;
-		if (curr_item->type == RTE_FLOW_ITEM_TYPE_END)
-			found_end = true;
-	}
-	/* Allocate new array of items and prepend REPRESENTED_PORT item. */
+	return ++nb_items;
+}
+
+static struct rte_flow_item *
+flow_hw_prepend_item(const struct rte_flow_item *items,
+		     const uint32_t nb_items,
+		     const struct rte_flow_item *new_item,
+		     struct rte_flow_error *error)
+{
+	struct rte_flow_item *copied_items;
+	size_t size;
+
+	/* Allocate new array of items. */
 	size = sizeof(*copied_items) * (nb_items + 1);
 	copied_items = mlx5_malloc(MLX5_MEM_ZERO, size, 0, rte_socket_id());
 	if (!copied_items) {
@@ -4485,14 +4505,9 @@ flow_hw_copy_prepend_port_item(const struct rte_flow_item *items,
 				   "cannot allocate item template");
 		return NULL;
 	}
-	copied_items[0] = (struct rte_flow_item){
-		.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
-		.spec = NULL,
-		.last = NULL,
-		.mask = &rte_flow_item_ethdev_mask,
-	};
-	for (i = 1; i < nb_items + 1; ++i)
-		copied_items[i] = items[i - 1];
+	/* Put new item at the beginning and copy the rest. */
+	copied_items[0] = *new_item;
+	rte_memcpy(&copied_items[1], items, sizeof(*items) * nb_items);
 	return copied_items;
 }
 
@@ -4513,17 +4528,13 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 	if (priv->sh->config.dv_esw_en) {
 		MLX5_ASSERT(priv->master || priv->representor);
 		if (priv->master) {
-			/*
-			 * It is allowed to specify ingress, egress and transfer attributes
-			 * at the same time, in order to construct flows catching all missed
-			 * FDB traffic and forwarding it to the master port.
-			 */
-			if (!(attr->ingress ^ attr->egress ^ attr->transfer))
+			if ((attr->ingress && attr->egress) ||
+			    (attr->ingress && attr->transfer) ||
+			    (attr->egress && attr->transfer))
 				return rte_flow_error_set(error, EINVAL,
 							  RTE_FLOW_ERROR_TYPE_ATTR, NULL,
-							  "only one or all direction attributes"
-							  " at once can be used on transfer proxy"
-							  " port");
+							  "only one direction attribute at once"
+							  " can be used on transfer proxy port");
 		} else {
 			if (attr->transfer)
 				return rte_flow_error_set(error, EINVAL,
@@ -4576,11 +4587,16 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 			break;
 		}
 		case RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT:
-			if (attr->ingress || attr->egress)
+			if (attr->ingress && priv->sh->config.repr_matching)
+				return rte_flow_error_set(error, EINVAL,
+						  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+						  "represented port item cannot be used"
+						  " when ingress attribute is set");
+			if (attr->egress)
 				return rte_flow_error_set(error, EINVAL,
 						  RTE_FLOW_ERROR_TYPE_ITEM, NULL,
 						  "represented port item cannot be used"
-						  " when transfer attribute is set");
+						  " when egress attribute is set");
 			break;
 		case RTE_FLOW_ITEM_TYPE_META:
 			if (!priv->sh->config.dv_esw_en ||
@@ -4642,6 +4658,17 @@ flow_hw_pattern_validate(struct rte_eth_dev *dev,
 	return 0;
 }
 
+static bool
+flow_hw_pattern_has_sq_match(const struct rte_flow_item *items)
+{
+	unsigned int i;
+
+	for (i = 0; items[i].type != RTE_FLOW_ITEM_TYPE_END; ++i)
+		if (items[i].type == (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ)
+			return true;
+	return false;
+}
+
 /**
  * Create flow item template.
  *
@@ -4667,17 +4694,53 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 	struct rte_flow_pattern_template *it;
 	struct rte_flow_item *copied_items = NULL;
 	const struct rte_flow_item *tmpl_items;
+	uint64_t orig_item_nb;
+	struct rte_flow_item port = {
+		.type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT,
+		.mask = &rte_flow_item_ethdev_mask,
+	};
+	struct rte_flow_item_tag tag_v = {
+		.data = 0,
+		.index = REG_C_0,
+	};
+	struct rte_flow_item_tag tag_m = {
+		.data = flow_hw_tx_tag_regc_mask(dev),
+		.index = 0xff,
+	};
+	struct rte_flow_item tag = {
+		.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_TAG,
+		.spec = &tag_v,
+		.mask = &tag_m,
+		.last = NULL
+	};
 
 	if (flow_hw_pattern_validate(dev, attr, items, error))
 		return NULL;
-	if (priv->sh->config.dv_esw_en && attr->ingress && !attr->egress && !attr->transfer) {
-		copied_items = flow_hw_copy_prepend_port_item(items, error);
+	orig_item_nb = flow_hw_count_items(items);
+	if (priv->sh->config.dv_esw_en &&
+	    priv->sh->config.repr_matching &&
+	    attr->ingress && !attr->egress && !attr->transfer) {
+		copied_items = flow_hw_prepend_item(items, orig_item_nb, &port, error);
+		if (!copied_items)
+			return NULL;
+		tmpl_items = copied_items;
+	} else if (priv->sh->config.dv_esw_en &&
+		   priv->sh->config.repr_matching &&
+		   !attr->ingress && attr->egress && !attr->transfer) {
+		if (flow_hw_pattern_has_sq_match(items)) {
+			DRV_LOG(DEBUG, "Port %u omitting implicit REG_C_0 match for egress "
+				       "pattern template", dev->data->port_id);
+			tmpl_items = items;
+			goto setup_pattern_template;
+		}
+		copied_items = flow_hw_prepend_item(items, orig_item_nb, &tag, error);
 		if (!copied_items)
 			return NULL;
 		tmpl_items = copied_items;
 	} else {
 		tmpl_items = items;
 	}
+setup_pattern_template:
 	it = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*it), 0, rte_socket_id());
 	if (!it) {
 		if (copied_items)
@@ -4689,6 +4752,7 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		return NULL;
 	}
 	it->attr = *attr;
+	it->orig_item_nb = orig_item_nb;
 	it->mt = mlx5dr_match_template_create(tmpl_items, attr->relaxed_matching);
 	if (!it->mt) {
 		if (copied_items)
@@ -4701,11 +4765,15 @@ flow_hw_pattern_template_create(struct rte_eth_dev *dev,
 		return NULL;
 	}
 	it->item_flags = flow_hw_rss_item_flags_get(tmpl_items);
-	it->implicit_port = !!copied_items;
+	if (copied_items) {
+		if (attr->ingress)
+			it->implicit_port = true;
+		else if (attr->egress)
+			it->implicit_tag = true;
+		mlx5_free(copied_items);
+	}
 	__atomic_fetch_add(&it->refcnt, 1, __ATOMIC_RELAXED);
 	LIST_INSERT_HEAD(&priv->flow_hw_itt, it, next);
-	if (copied_items)
-		mlx5_free(copied_items);
 	return it;
 }
 
@@ -5105,6 +5173,254 @@ flow_hw_free_vport_actions(struct mlx5_priv *priv)
 	priv->hw_vport = NULL;
 }
 
+/**
+ * Create an egress pattern template matching on source SQ.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Pointer to pattern template on success. NULL otherwise, and rte_errno is set.
+ */
+static struct rte_flow_pattern_template *
+flow_hw_create_tx_repr_sq_pattern_tmpl(struct rte_eth_dev *dev)
+{
+	struct rte_flow_pattern_template_attr attr = {
+		.relaxed_matching = 0,
+		.egress = 1,
+	};
+	struct mlx5_rte_flow_item_sq sq_mask = {
+		.queue = UINT32_MAX,
+	};
+	struct rte_flow_item items[] = {
+		{
+			.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+			.mask = &sq_mask,
+		},
+		{
+			.type = RTE_FLOW_ITEM_TYPE_END,
+		},
+	};
+
+	return flow_hw_pattern_template_create(dev, &attr, items, NULL);
+}
+
+static __rte_always_inline uint32_t
+flow_hw_tx_tag_regc_mask(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t mask = priv->sh->dv_regc0_mask;
+
+	/* Mask is verified during device initialization. Sanity checking here. */
+	MLX5_ASSERT(mask != 0);
+	/*
+	 * Availability of sufficient number of bits in REG_C_0 is verified on initialization.
+	 * Sanity checking here.
+	 */
+	MLX5_ASSERT(__builtin_popcount(mask) >= __builtin_popcount(priv->vport_meta_mask));
+	return mask;
+}
+
+static __rte_always_inline uint32_t
+flow_hw_tx_tag_regc_value(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint32_t tag;
+
+	/* Mask is verified during device initialization. Sanity checking here. */
+	MLX5_ASSERT(priv->vport_meta_mask != 0);
+	tag = priv->vport_meta_tag >> (rte_bsf32(priv->vport_meta_mask));
+	/*
+	 * Availability of sufficient number of bits in REG_C_0 is verified on initialization.
+	 * Sanity checking here.
+	 */
+	MLX5_ASSERT((tag & priv->sh->dv_regc0_mask) == tag);
+	return tag;
+}
+
+static void
+flow_hw_update_action_mask(struct rte_flow_action *action,
+			   struct rte_flow_action *mask,
+			   enum rte_flow_action_type type,
+			   void *conf_v,
+			   void *conf_m)
+{
+	action->type = type;
+	action->conf = conf_v;
+	mask->type = type;
+	mask->conf = conf_m;
+}
+
+/**
+ * Create an egress actions template with MODIFY_FIELD action for setting unused REG_C_0 bits
+ * to vport tag and JUMP action to group 1.
+ *
+ * If extended metadata mode is enabled, then MODIFY_FIELD action for copying software metadata
+ * to REG_C_1 is added as well.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   Pointer to actions template on success. NULL otherwise, and rte_errno is set.
+ */
+static struct rte_flow_actions_template *
+flow_hw_create_tx_repr_tag_jump_acts_tmpl(struct rte_eth_dev *dev)
+{
+	uint32_t tag_mask = flow_hw_tx_tag_regc_mask(dev);
+	uint32_t tag_value = flow_hw_tx_tag_regc_value(dev);
+	struct rte_flow_actions_template_attr attr = {
+		.egress = 1,
+	};
+	struct rte_flow_action_modify_field set_tag_v = {
+		.operation = RTE_FLOW_MODIFY_SET,
+		.dst = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = REG_C_0,
+			.offset = rte_bsf32(tag_mask),
+		},
+		.src = {
+			.field = RTE_FLOW_FIELD_VALUE,
+		},
+		.width = __builtin_popcount(tag_mask),
+	};
+	struct rte_flow_action_modify_field set_tag_m = {
+		.operation = RTE_FLOW_MODIFY_SET,
+		.dst = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = UINT32_MAX,
+			.offset = UINT32_MAX,
+		},
+		.src = {
+			.field = RTE_FLOW_FIELD_VALUE,
+		},
+		.width = UINT32_MAX,
+	};
+	struct rte_flow_action_modify_field copy_metadata_v = {
+		.operation = RTE_FLOW_MODIFY_SET,
+		.dst = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = REG_C_1,
+		},
+		.src = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = REG_A,
+		},
+		.width = 32,
+	};
+	struct rte_flow_action_modify_field copy_metadata_m = {
+		.operation = RTE_FLOW_MODIFY_SET,
+		.dst = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = UINT32_MAX,
+			.offset = UINT32_MAX,
+		},
+		.src = {
+			.field = (enum rte_flow_field_id)MLX5_RTE_FLOW_FIELD_META_REG,
+			.level = UINT32_MAX,
+			.offset = UINT32_MAX,
+		},
+		.width = UINT32_MAX,
+	};
+	struct rte_flow_action_jump jump_v = {
+		.group = MLX5_HW_LOWEST_USABLE_GROUP,
+	};
+	struct rte_flow_action_jump jump_m = {
+		.group = UINT32_MAX,
+	};
+	struct rte_flow_action actions_v[4] = { { 0 } };
+	struct rte_flow_action actions_m[4] = { { 0 } };
+	unsigned int idx = 0;
+
+	rte_memcpy(set_tag_v.src.value, &tag_value, sizeof(tag_value));
+	rte_memcpy(set_tag_m.src.value, &tag_mask, sizeof(tag_mask));
+	flow_hw_update_action_mask(&actions_v[idx], &actions_m[idx],
+				   RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+				   &set_tag_v, &set_tag_m);
+	idx++;
+	if (MLX5_SH(dev)->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
+		flow_hw_update_action_mask(&actions_v[idx], &actions_m[idx],
+					   RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
+					   &copy_metadata_v, &copy_metadata_m);
+		idx++;
+	}
+	flow_hw_update_action_mask(&actions_v[idx], &actions_m[idx], RTE_FLOW_ACTION_TYPE_JUMP,
+				   &jump_v, &jump_m);
+	idx++;
+	flow_hw_update_action_mask(&actions_v[idx], &actions_m[idx], RTE_FLOW_ACTION_TYPE_END,
+				   NULL, NULL);
+	idx++;
+	MLX5_ASSERT(idx <= RTE_DIM(actions_v));
+	return flow_hw_actions_template_create(dev, &attr, actions_v, actions_m, NULL);
+}
+
+static void
+flow_hw_cleanup_tx_repr_tagging(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+
+	if (priv->hw_tx_repr_tagging_tbl) {
+		flow_hw_table_destroy(dev, priv->hw_tx_repr_tagging_tbl, NULL);
+		priv->hw_tx_repr_tagging_tbl = NULL;
+	}
+	if (priv->hw_tx_repr_tagging_at) {
+		flow_hw_actions_template_destroy(dev, priv->hw_tx_repr_tagging_at, NULL);
+		priv->hw_tx_repr_tagging_at = NULL;
+	}
+	if (priv->hw_tx_repr_tagging_pt) {
+		flow_hw_pattern_template_destroy(dev, priv->hw_tx_repr_tagging_pt, NULL);
+		priv->hw_tx_repr_tagging_pt = NULL;
+	}
+}
+
+/**
+ * Setup templates and table used to create default Tx flow rules. These default rules
+ * allow for matching Tx representor traffic using a vport tag placed in unused bits of
+ * REG_C_0 register.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 on success, negative errno value otherwise.
+ */
+static int
+flow_hw_setup_tx_repr_tagging(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_flow_template_table_attr attr = {
+		.flow_attr = {
+			.group = 0,
+			.priority = MLX5_HW_LOWEST_PRIO_ROOT,
+			.egress = 1,
+		},
+		.nb_flows = MLX5_HW_CTRL_FLOW_NB_RULES,
+	};
+	struct mlx5_flow_template_table_cfg cfg = {
+		.attr = attr,
+		.external = false,
+	};
+
+	MLX5_ASSERT(priv->sh->config.dv_esw_en);
+	MLX5_ASSERT(priv->sh->config.repr_matching);
+	priv->hw_tx_repr_tagging_pt = flow_hw_create_tx_repr_sq_pattern_tmpl(dev);
+	if (!priv->hw_tx_repr_tagging_pt)
+		goto error;
+	priv->hw_tx_repr_tagging_at = flow_hw_create_tx_repr_tag_jump_acts_tmpl(dev);
+	if (!priv->hw_tx_repr_tagging_at)
+		goto error;
+	priv->hw_tx_repr_tagging_tbl = flow_hw_table_create(dev, &cfg,
+							    &priv->hw_tx_repr_tagging_pt, 1,
+							    &priv->hw_tx_repr_tagging_at, 1,
+							    NULL);
+	if (!priv->hw_tx_repr_tagging_tbl)
+		goto error;
+	return 0;
+error:
+	flow_hw_cleanup_tx_repr_tagging(dev);
+	return -rte_errno;
+}
+
 static uint32_t
 flow_hw_esw_mgr_regc_marker_mask(struct rte_eth_dev *dev)
 {
@@ -5511,29 +5827,43 @@ flow_hw_create_tx_default_mreg_copy_actions_template(struct rte_eth_dev *dev)
 		},
 		.width = UINT32_MAX,
 	};
-	const struct rte_flow_action copy_reg_action[] = {
+	const struct rte_flow_action_jump jump_action = {
+		.group = 1,
+	};
+	const struct rte_flow_action_jump jump_mask = {
+		.group = UINT32_MAX,
+	};
+	const struct rte_flow_action actions[] = {
 		[0] = {
 			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
 			.conf = &mreg_action,
 		},
 		[1] = {
+			.type = RTE_FLOW_ACTION_TYPE_JUMP,
+			.conf = &jump_action,
+		},
+		[2] = {
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
-	const struct rte_flow_action copy_reg_mask[] = {
+	const struct rte_flow_action masks[] = {
 		[0] = {
 			.type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
 			.conf = &mreg_mask,
 		},
 		[1] = {
+			.type = RTE_FLOW_ACTION_TYPE_JUMP,
+			.conf = &jump_mask,
+		},
+		[2] = {
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
 	struct rte_flow_error drop_err;
 
 	RTE_SET_USED(drop_err);
-	return flow_hw_actions_template_create(dev, &tx_act_attr, copy_reg_action,
-					       copy_reg_mask, &drop_err);
+	return flow_hw_actions_template_create(dev, &tx_act_attr, actions,
+					       masks, &drop_err);
 }
 
 /**
@@ -5711,63 +6041,21 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 	struct rte_flow_actions_template *jump_one_actions_tmpl = NULL;
 	struct rte_flow_actions_template *tx_meta_actions_tmpl = NULL;
 	uint32_t xmeta = priv->sh->config.dv_xmeta_en;
+	uint32_t repr_matching = priv->sh->config.repr_matching;
 
-	/* Item templates */
+	/* Create templates and table for default SQ miss flow rules - root table. */
 	esw_mgr_items_tmpl = flow_hw_create_ctrl_esw_mgr_pattern_template(dev);
 	if (!esw_mgr_items_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create E-Switch Manager item"
 			" template for control flows", dev->data->port_id);
 		goto error;
 	}
-	regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev);
-	if (!regc_sq_items_tmpl) {
-		DRV_LOG(ERR, "port %u failed to create SQ item template for"
-			" control flows", dev->data->port_id);
-		goto error;
-	}
-	port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev);
-	if (!port_items_tmpl) {
-		DRV_LOG(ERR, "port %u failed to create SQ item template for"
-			" control flows", dev->data->port_id);
-		goto error;
-	}
-	if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
-		tx_meta_items_tmpl = flow_hw_create_tx_default_mreg_copy_pattern_template(dev);
-		if (!tx_meta_items_tmpl) {
-			DRV_LOG(ERR, "port %u failed to Tx metadata copy pattern"
-				" template for control flows", dev->data->port_id);
-			goto error;
-		}
-	}
-	/* Action templates */
 	regc_jump_actions_tmpl = flow_hw_create_ctrl_regc_jump_actions_template(dev);
 	if (!regc_jump_actions_tmpl) {
 		DRV_LOG(ERR, "port %u failed to create REG_C set and jump action template"
 			" for control flows", dev->data->port_id);
 		goto error;
 	}
-	port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev);
-	if (!port_actions_tmpl) {
-		DRV_LOG(ERR, "port %u failed to create port action template"
-			" for control flows", dev->data->port_id);
-		goto error;
-	}
-	jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
-			(dev, MLX5_HW_LOWEST_USABLE_GROUP);
-	if (!jump_one_actions_tmpl) {
-		DRV_LOG(ERR, "port %u failed to create jump action template"
-			" for control flows", dev->data->port_id);
-		goto error;
-	}
-	if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
-		tx_meta_actions_tmpl = flow_hw_create_tx_default_mreg_copy_actions_template(dev);
-		if (!tx_meta_actions_tmpl) {
-			DRV_LOG(ERR, "port %u failed to Tx metadata copy actions"
-				" template for control flows", dev->data->port_id);
-			goto error;
-		}
-	}
-	/* Tables */
 	MLX5_ASSERT(priv->hw_esw_sq_miss_root_tbl == NULL);
 	priv->hw_esw_sq_miss_root_tbl = flow_hw_create_ctrl_sq_miss_root_table
 			(dev, esw_mgr_items_tmpl, regc_jump_actions_tmpl);
@@ -5776,6 +6064,19 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 			" for control flows", dev->data->port_id);
 		goto error;
 	}
+	/* Create templates and table for default SQ miss flow rules - non-root table. */
+	regc_sq_items_tmpl = flow_hw_create_ctrl_regc_sq_pattern_template(dev);
+	if (!regc_sq_items_tmpl) {
+		DRV_LOG(ERR, "port %u failed to create SQ item template for"
+			" control flows", dev->data->port_id);
+		goto error;
+	}
+	port_actions_tmpl = flow_hw_create_ctrl_port_actions_template(dev);
+	if (!port_actions_tmpl) {
+		DRV_LOG(ERR, "port %u failed to create port action template"
+			" for control flows", dev->data->port_id);
+		goto error;
+	}
 	MLX5_ASSERT(priv->hw_esw_sq_miss_tbl == NULL);
 	priv->hw_esw_sq_miss_tbl = flow_hw_create_ctrl_sq_miss_table(dev, regc_sq_items_tmpl,
 								     port_actions_tmpl);
@@ -5784,6 +6085,20 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 			" for control flows", dev->data->port_id);
 		goto error;
 	}
+	/* Create templates and table for default FDB jump flow rules. */
+	port_items_tmpl = flow_hw_create_ctrl_port_pattern_template(dev);
+	if (!port_items_tmpl) {
+		DRV_LOG(ERR, "port %u failed to create port item template for"
+			" control flows", dev->data->port_id);
+		goto error;
+	}
+	jump_one_actions_tmpl = flow_hw_create_ctrl_jump_actions_template
+			(dev, MLX5_HW_LOWEST_USABLE_GROUP);
+	if (!jump_one_actions_tmpl) {
+		DRV_LOG(ERR, "port %u failed to create jump action template"
+			" for control flows", dev->data->port_id);
+		goto error;
+	}
 	MLX5_ASSERT(priv->hw_esw_zero_tbl == NULL);
 	priv->hw_esw_zero_tbl = flow_hw_create_ctrl_jump_table(dev, port_items_tmpl,
 							       jump_one_actions_tmpl);
@@ -5792,7 +6107,20 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 			" for control flows", dev->data->port_id);
 		goto error;
 	}
-	if (xmeta == MLX5_XMETA_MODE_META32_HWS) {
+	/* Create templates and table for default Tx metadata copy flow rule. */
+	if (!repr_matching && xmeta == MLX5_XMETA_MODE_META32_HWS) {
+		tx_meta_items_tmpl = flow_hw_create_tx_default_mreg_copy_pattern_template(dev);
+		if (!tx_meta_items_tmpl) {
+			DRV_LOG(ERR, "port %u failed to create Tx metadata copy pattern"
+				" template for control flows", dev->data->port_id);
+			goto error;
+		}
+		tx_meta_actions_tmpl = flow_hw_create_tx_default_mreg_copy_actions_template(dev);
+		if (!tx_meta_actions_tmpl) {
+			DRV_LOG(ERR, "port %u failed to create Tx metadata copy actions"
+				" template for control flows", dev->data->port_id);
+			goto error;
+		}
 		MLX5_ASSERT(priv->hw_tx_meta_cpy_tbl == NULL);
 		priv->hw_tx_meta_cpy_tbl = flow_hw_create_tx_default_mreg_copy_table(dev,
 					tx_meta_items_tmpl, tx_meta_actions_tmpl);
@@ -5816,7 +6144,7 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 		flow_hw_table_destroy(dev, priv->hw_esw_sq_miss_root_tbl, NULL);
 		priv->hw_esw_sq_miss_root_tbl = NULL;
 	}
-	if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_actions_tmpl)
+	if (tx_meta_actions_tmpl)
 		flow_hw_actions_template_destroy(dev, tx_meta_actions_tmpl, NULL);
 	if (jump_one_actions_tmpl)
 		flow_hw_actions_template_destroy(dev, jump_one_actions_tmpl, NULL);
@@ -5824,7 +6152,7 @@ flow_hw_create_ctrl_tables(struct rte_eth_dev *dev)
 		flow_hw_actions_template_destroy(dev, port_actions_tmpl, NULL);
 	if (regc_jump_actions_tmpl)
 		flow_hw_actions_template_destroy(dev, regc_jump_actions_tmpl, NULL);
-	if (xmeta == MLX5_XMETA_MODE_META32_HWS && tx_meta_items_tmpl)
+	if (tx_meta_items_tmpl)
 		flow_hw_pattern_template_destroy(dev, tx_meta_items_tmpl, NULL);
 	if (port_items_tmpl)
 		flow_hw_pattern_template_destroy(dev, port_items_tmpl, NULL);
@@ -6169,6 +6497,13 @@ flow_hw_configure(struct rte_eth_dev *dev,
 		if (!priv->hw_tag[i])
 			goto err;
 	}
+	if (priv->sh->config.dv_esw_en && priv->sh->config.repr_matching) {
+		ret = flow_hw_setup_tx_repr_tagging(dev);
+		if (ret) {
+			rte_errno = -ret;
+			goto err;
+		}
+	}
 	if (is_proxy) {
 		ret = flow_hw_create_vport_actions(priv);
 		if (ret) {
@@ -6291,6 +6626,7 @@ flow_hw_resource_release(struct rte_eth_dev *dev)
 		return;
 	flow_hw_rxq_flag_set(dev, false);
 	flow_hw_flush_all_ctrl_flows(dev);
+	flow_hw_cleanup_tx_repr_tagging(dev);
 	while (!LIST_EMPTY(&priv->flow_hw_tbl_ongo)) {
 		tbl = LIST_FIRST(&priv->flow_hw_tbl_ongo);
 		flow_hw_table_destroy(dev, tbl, NULL);
@@ -7650,45 +7986,30 @@ flow_hw_destroy_ctrl_flow(struct rte_eth_dev *dev, struct rte_flow *flow)
 }
 
 /**
- * Destroys control flows created on behalf of @p owner_dev device.
+ * Destroys control flows created on behalf of @p owner device on @p dev device.
  *
- * @param owner_dev
+ * @param dev
+ *   Pointer to Ethernet device on which control flows were created.
+ * @param owner
  *   Pointer to Ethernet device owning control flows.
  *
  * @return
  *   0 on success, otherwise negative error code is returned and
  *   rte_errno is set.
  */
-int
-mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *owner_dev)
+static int
+flow_hw_flush_ctrl_flows_owned_by(struct rte_eth_dev *dev, struct rte_eth_dev *owner)
 {
-	struct mlx5_priv *owner_priv = owner_dev->data->dev_private;
-	struct rte_eth_dev *proxy_dev;
-	struct mlx5_priv *proxy_priv;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_hw_ctrl_flow *cf;
 	struct mlx5_hw_ctrl_flow *cf_next;
-	uint16_t owner_port_id = owner_dev->data->port_id;
-	uint16_t proxy_port_id = owner_dev->data->port_id;
 	int ret;
 
-	if (owner_priv->sh->config.dv_esw_en) {
-		if (rte_flow_pick_transfer_proxy(owner_port_id, &proxy_port_id, NULL)) {
-			DRV_LOG(ERR, "Unable to find proxy port for port %u",
-				owner_port_id);
-			rte_errno = EINVAL;
-			return -rte_errno;
-		}
-		proxy_dev = &rte_eth_devices[proxy_port_id];
-		proxy_priv = proxy_dev->data->dev_private;
-	} else {
-		proxy_dev = owner_dev;
-		proxy_priv = owner_priv;
-	}
-	cf = LIST_FIRST(&proxy_priv->hw_ctrl_flows);
+	cf = LIST_FIRST(&priv->hw_ctrl_flows);
 	while (cf != NULL) {
 		cf_next = LIST_NEXT(cf, next);
-		if (cf->owner_dev == owner_dev) {
-			ret = flow_hw_destroy_ctrl_flow(proxy_dev, cf->flow);
+		if (cf->owner_dev == owner) {
+			ret = flow_hw_destroy_ctrl_flow(dev, cf->flow);
 			if (ret) {
 				rte_errno = ret;
 				return -ret;
@@ -7701,6 +8022,50 @@ mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *owner_dev)
 	return 0;
 }
 
+/**
+ * Destroys control flows created for @p owner_dev device.
+ *
+ * @param owner_dev
+ *   Pointer to Ethernet device owning control flows.
+ *
+ * @return
+ *   0 on success, otherwise negative error code is returned and
+ *   rte_errno is set.
+ */
+int
+mlx5_flow_hw_flush_ctrl_flows(struct rte_eth_dev *owner_dev)
+{
+	struct mlx5_priv *owner_priv = owner_dev->data->dev_private;
+	struct rte_eth_dev *proxy_dev;
+	uint16_t owner_port_id = owner_dev->data->port_id;
+	uint16_t proxy_port_id = owner_dev->data->port_id;
+	int ret;
+
+	/* Flush all flows created by this port for itself. */
+	ret = flow_hw_flush_ctrl_flows_owned_by(owner_dev, owner_dev);
+	if (ret)
+		return ret;
+	/* Flush all flows created for this port on proxy port. */
+	if (owner_priv->sh->config.dv_esw_en) {
+		ret = rte_flow_pick_transfer_proxy(owner_port_id, &proxy_port_id, NULL);
+		if (ret == -ENODEV) {
+			DRV_LOG(DEBUG, "Unable to find transfer proxy port for port %u. It was "
+				       "probably closed. Control flows were cleared.",
+				       owner_port_id);
+			rte_errno = 0;
+			return 0;
+		} else if (ret) {
+			DRV_LOG(ERR, "Unable to find proxy port for port %u (ret = %d)",
+				owner_port_id, ret);
+			return ret;
+		}
+		proxy_dev = &rte_eth_devices[proxy_port_id];
+	} else {
+		proxy_dev = owner_dev;
+	}
+	return flow_hw_flush_ctrl_flows_owned_by(proxy_dev, owner_dev);
+}
+
 /**
  * Destroys all control flows created on @p dev device.
  *
@@ -7952,6 +8317,9 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 			.conf = &mreg_action,
 		},
 		[1] = {
+			.type = RTE_FLOW_ACTION_TYPE_JUMP,
+		},
+		[2] = {
 			.type = RTE_FLOW_ACTION_TYPE_END,
 		},
 	};
@@ -7964,6 +8332,60 @@ mlx5_flow_hw_create_tx_default_mreg_copy_flow(struct rte_eth_dev *dev)
 					eth_all, 0, copy_reg_action, 0);
 }
 
+int
+mlx5_flow_hw_tx_repr_matching_flow(struct rte_eth_dev *dev, uint32_t sqn)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_rte_flow_item_sq sq_spec = {
+		.queue = sqn,
+	};
+	struct rte_flow_item items[] = {
+		{
+			.type = (enum rte_flow_item_type)MLX5_RTE_FLOW_ITEM_TYPE_SQ,
+			.spec = &sq_spec,
+		},
+		{
+			.type = RTE_FLOW_ITEM_TYPE_END,
+		},
+	};
+	/*
+	 * Allocate actions array suitable for all cases - extended metadata enabled or not.
+	 * With extended metadata there will be an additional MODIFY_FIELD action before JUMP.
+	 */
+	struct rte_flow_action actions[] = {
+		{ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD },
+		{ .type = RTE_FLOW_ACTION_TYPE_JUMP },
+		{ .type = RTE_FLOW_ACTION_TYPE_END },
+		{ .type = RTE_FLOW_ACTION_TYPE_END },
+	};
+
+	/* It is assumed that caller checked for representor matching. */
+	MLX5_ASSERT(priv->sh->config.repr_matching);
+	if (!priv->dr_ctx) {
+		DRV_LOG(DEBUG, "Port %u must be configured for HWS before creating "
+			       "default egress flow rules. Omitting creation.",
+			       dev->data->port_id);
+		return 0;
+	}
+	if (!priv->hw_tx_repr_tagging_tbl) {
+		DRV_LOG(ERR, "Port %u is configured for HWS, but table for default "
+			     "egress flow rules does not exist.",
+			     dev->data->port_id);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	/*
+	 * If extended metadata mode is enabled, then an additional MODIFY_FIELD action must be
+	 * placed before terminating JUMP action.
+	 */
+	if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS) {
+		actions[1].type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD;
+		actions[2].type = RTE_FLOW_ACTION_TYPE_JUMP;
+	}
+	return flow_hw_create_ctrl_flow(dev, dev, priv->hw_tx_repr_tagging_tbl,
+					items, 0, actions, 0);
+}
+
 void
 mlx5_flow_meter_uninit(struct rte_eth_dev *dev)
 {
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index a973cbc5e3..dcb02f2a7f 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1065,6 +1065,69 @@ mlx5_hairpin_get_peer_ports(struct rte_eth_dev *dev, uint16_t *peer_ports,
 	return ret;
 }
 
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+
+/**
+ * Check if starting representor port is allowed.
+ *
+ * If transfer proxy port is configured for HWS, then starting representor port
+ * is allowed if and only if transfer proxy port is started as well.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   If starting representor port is allowed, then 0 is returned.
+ *   Otherwise rte_errno is set, and negative errno value is returned.
+ */
+static int
+mlx5_hw_representor_port_allowed_start(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct rte_eth_dev *proxy_dev;
+	struct mlx5_priv *proxy_priv;
+	uint16_t proxy_port_id = UINT16_MAX;
+	int ret;
+
+	MLX5_ASSERT(priv->sh->config.dv_flow_en == 2);
+	MLX5_ASSERT(priv->sh->config.dv_esw_en);
+	MLX5_ASSERT(priv->representor);
+	ret = rte_flow_pick_transfer_proxy(dev->data->port_id, &proxy_port_id, NULL);
+	if (ret) {
+		if (ret == -ENODEV)
+			DRV_LOG(ERR, "Starting representor port %u is not allowed. Transfer "
+				     "proxy port is not available.", dev->data->port_id);
+		else
+			DRV_LOG(ERR, "Failed to pick transfer proxy for port %u (ret = %d)",
+				dev->data->port_id, ret);
+		return ret;
+	}
+	proxy_dev = &rte_eth_devices[proxy_port_id];
+	proxy_priv = proxy_dev->data->dev_private;
+	if (proxy_priv->dr_ctx == NULL) {
+		DRV_LOG(DEBUG, "Starting representor port %u is allowed, but default traffic flows"
+			       " will not be created. Transfer proxy port must be configured"
+			       " for HWS and started.",
+			       dev->data->port_id);
+		return 0;
+	}
+	if (!proxy_dev->data->dev_started) {
+		DRV_LOG(ERR, "Failed to start port %u: transfer proxy (port %u) must be started",
+			     dev->data->port_id, proxy_port_id);
+		rte_errno = EAGAIN;
+		return -rte_errno;
+	}
+	if (priv->sh->config.repr_matching && !priv->dr_ctx) {
+		DRV_LOG(ERR, "Failed to start port %u: with representor matching enabled, port "
+			     "must be configured for HWS", dev->data->port_id);
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	return 0;
+}
+
+#endif
+
 /**
  * DPDK callback to start the device.
  *
@@ -1084,6 +1147,19 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	int fine_inline;
 
 	DRV_LOG(DEBUG, "port %u starting device", dev->data->port_id);
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	if (priv->sh->config.dv_flow_en == 2) {
+		/* If there is no E-Switch, then there are no start/stop order limitations. */
+		if (!priv->sh->config.dv_esw_en)
+			goto continue_dev_start;
+		/* If master is being started, then it is always allowed. */
+		if (priv->master)
+			goto continue_dev_start;
+		if (mlx5_hw_representor_port_allowed_start(dev))
+			return -rte_errno;
+	}
+continue_dev_start:
+#endif
 	fine_inline = rte_mbuf_dynflag_lookup
 		(RTE_PMD_MLX5_FINE_GRANULARITY_INLINE, NULL);
 	if (fine_inline >= 0)
@@ -1248,6 +1324,53 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	return -rte_errno;
 }
 
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+/**
+ * Check if stopping transfer proxy port is allowed.
+ *
+ * If transfer proxy port is configured for HWS, then it is allowed to stop it
+ * if and only if all other representor ports are stopped.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   If stopping transfer proxy port is allowed, then 0 is returned.
+ *   Otherwise rte_errno is set, and negative errno value is returned.
+ */
+static int
+mlx5_hw_proxy_port_allowed_stop(struct rte_eth_dev *dev)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	bool representor_started = false;
+	uint16_t port_id;
+
+	MLX5_ASSERT(priv->sh->config.dv_flow_en == 2);
+	MLX5_ASSERT(priv->sh->config.dv_esw_en);
+	MLX5_ASSERT(priv->master);
+	/* If transfer proxy port was not configured for HWS, then stopping it is allowed. */
+	if (!priv->dr_ctx)
+		return 0;
+	MLX5_ETH_FOREACH_DEV(port_id, dev->device) {
+		const struct rte_eth_dev *port_dev = &rte_eth_devices[port_id];
+		const struct mlx5_priv *port_priv = port_dev->data->dev_private;
+
+		if (port_id != dev->data->port_id &&
+		    port_priv->domain_id == priv->domain_id &&
+		    port_dev->data->dev_started)
+			representor_started = true;
+	}
+	if (representor_started) {
+		DRV_LOG(INFO, "Failed to stop port %u: attached representor ports"
+			      " must be stopped before stopping transfer proxy port",
+			      dev->data->port_id);
+		rte_errno = EBUSY;
+		return -rte_errno;
+	}
+	return 0;
+}
+#endif
+
 /**
  * DPDK callback to stop the device.
  *
@@ -1261,6 +1384,21 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 
+#ifdef HAVE_IBV_FLOW_DV_SUPPORT
+	if (priv->sh->config.dv_flow_en == 2) {
+		/* If there is no E-Switch, then there are no start/stop order limitations. */
+		if (!priv->sh->config.dv_esw_en)
+			goto continue_dev_stop;
+		/* If representor is being stopped, then it is always allowed. */
+		if (priv->representor)
+			goto continue_dev_stop;
+		if (mlx5_hw_proxy_port_allowed_stop(dev)) {
+			dev->data->dev_started = 1;
+			return -rte_errno;
+		}
+	}
+continue_dev_stop:
+#endif
 	dev->data->dev_started = 0;
 	/* Prevent crashes when queues are still in use. */
 	dev->rx_pkt_burst = rte_eth_pkt_burst_dummy;
@@ -1296,13 +1434,21 @@ static int
 mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_sh_config *config = &priv->sh->config;
 	unsigned int i;
 	int ret;
 
-	if (priv->sh->config.dv_esw_en && priv->master) {
-		if (priv->sh->config.dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS)
-			if (mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev))
-				goto error;
+	/*
+	 * With extended metadata enabled, the Tx metadata copy is handled by default
+	 * Tx tagging flow rules, so default Tx flow rule is not needed. It is only
+	 * required when representor matching is disabled.
+	 */
+	if (config->dv_esw_en &&
+	    !config->repr_matching &&
+	    config->dv_xmeta_en == MLX5_XMETA_MODE_META32_HWS &&
+	    priv->master) {
+		if (mlx5_flow_hw_create_tx_default_mreg_copy_flow(dev))
+			goto error;
 	}
 	for (i = 0; i < priv->txqs_n; ++i) {
 		struct mlx5_txq_ctrl *txq = mlx5_txq_get(dev, i);
@@ -1311,17 +1457,22 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
 		if (!txq)
 			continue;
 		queue = mlx5_txq_get_sqn(txq);
-		if ((priv->representor || priv->master) &&
-		    priv->sh->config.dv_esw_en) {
+		if ((priv->representor || priv->master) && config->dv_esw_en) {
 			if (mlx5_flow_hw_esw_create_sq_miss_flow(dev, queue)) {
 				mlx5_txq_release(dev, i);
 				goto error;
 			}
 		}
+		if (config->dv_esw_en && config->repr_matching) {
+			if (mlx5_flow_hw_tx_repr_matching_flow(dev, queue)) {
+				mlx5_txq_release(dev, i);
+				goto error;
+			}
+		}
 		mlx5_txq_release(dev, i);
 	}
-	if (priv->sh->config.fdb_def_rule) {
-		if ((priv->master || priv->representor) && priv->sh->config.dv_esw_en) {
+	if (config->fdb_def_rule) {
+		if ((priv->master || priv->representor) && config->dv_esw_en) {
 			if (!mlx5_flow_hw_esw_create_default_jump_flow(dev))
 				priv->fdb_def_rule = 1;
 			else
-- 
2.25.1



Thread overview: 140+ messages
2022-09-23 14:43 [PATCH 00/27] net/mlx5: HW steering PMD update Suanming Mou
2022-09-23 14:43 ` [PATCH 01/27] net/mlx5: fix invalid flow attributes Suanming Mou
2022-09-23 14:43 ` [PATCH 02/27] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-09-23 14:43 ` [PATCH 03/27] net/mlx5: add shared header reformat support Suanming Mou
2022-09-23 14:43 ` [PATCH 04/27] net/mlx5: add modify field hws support Suanming Mou
2022-09-23 14:43 ` [PATCH 05/27] net/mlx5: validate modify field action template Suanming Mou
2022-09-23 14:43 ` [PATCH 06/27] net/mlx5: enable mark flag for all ports in the same domain Suanming Mou
2022-09-23 14:43 ` [PATCH 07/27] net/mlx5: create port actions Suanming Mou
2022-09-23 14:43 ` [PATCH 08/27] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-09-23 14:43 ` [PATCH 09/27] ethdev: add meter profiles/policies config Suanming Mou
2022-09-23 14:43 ` [PATCH 10/27] net/mlx5: add HW steering meter action Suanming Mou
2022-09-23 14:43 ` [PATCH 11/27] net/mlx5: add HW steering counter action Suanming Mou
2022-09-23 14:43 ` [PATCH 12/27] net/mlx5: support caching queue action Suanming Mou
2022-09-23 14:43 ` [PATCH 13/27] net/mlx5: support DR action template API Suanming Mou
2022-09-23 14:43 ` [PATCH 14/27] net/mlx5: fix indirect action validate Suanming Mou
2022-09-23 14:43 ` [PATCH 15/27] net/mlx5: update indirect actions ops to HW variation Suanming Mou
2022-09-23 14:43 ` [PATCH 16/27] net/mlx5: support indirect count action for HW steering Suanming Mou
2022-09-23 14:43 ` [PATCH 17/27] net/mlx5: add pattern and table attribute validation Suanming Mou
2022-09-23 14:43 ` [PATCH 18/27] net/mlx5: add meta item support in egress Suanming Mou
2022-09-23 14:43 ` [PATCH 19/27] net/mlx5: add support for ASO return register Suanming Mou
2022-09-23 14:43 ` [PATCH 20/27] lib/ethdev: add connection tracking configuration Suanming Mou
2022-09-23 14:43 ` [PATCH 21/27] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-09-23 14:43 ` [PATCH 22/27] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-09-23 14:43 ` [PATCH 23/27] net/mlx5: add meter color flow matching in dv Suanming Mou
2022-09-23 14:43 ` [PATCH 24/27] net/mlx5: add meter color flow matching in hws Suanming Mou
2022-09-23 14:43 ` [PATCH 25/27] net/mlx5: implement profile/policy get Suanming Mou
2022-09-23 14:43 ` [PATCH 26/27] net/mlx5: implement METER MARK action for HWS Suanming Mou
2022-09-23 14:43 ` [PATCH 27/27] net/mlx5: implement METER MARK indirect " Suanming Mou
2022-09-28  3:31 ` [PATCH v2 00/17] net/mlx5: HW steering PMD update Suanming Mou
2022-09-28  3:31   ` [PATCH v2 01/17] net/mlx5: fix invalid flow attributes Suanming Mou
2022-09-28  3:31   ` [PATCH v2 02/17] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-09-28  3:31   ` [PATCH v2 03/17] net/mlx5: add shared header reformat support Suanming Mou
2022-09-28  3:31   ` [PATCH v2 04/17] net/mlx5: add modify field hws support Suanming Mou
2022-09-28  3:31   ` [PATCH v2 05/17] net/mlx5: add HW steering port action Suanming Mou
2022-09-28  3:31   ` [PATCH v2 06/17] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-09-28  3:31   ` [PATCH v2 07/17] net/mlx5: add HW steering meter action Suanming Mou
2022-09-28  3:31   ` [PATCH v2 08/17] net/mlx5: add HW steering counter action Suanming Mou
2022-09-28  3:31   ` [PATCH v2 09/17] net/mlx5: support DR action template API Suanming Mou
2022-09-28  3:31   ` [PATCH v2 10/17] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-09-28  3:31   ` [PATCH v2 11/17] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-09-28  3:31   ` [PATCH v2 12/17] net/mlx5: implement METER MARK indirect action for HWS Suanming Mou
2022-09-28  3:31   ` [PATCH v2 13/17] net/mlx5: add HWS AGE action support Suanming Mou
2022-09-28  3:31   ` [PATCH v2 14/17] net/mlx5: add async action push and pull support Suanming Mou
2022-09-28  3:31   ` [PATCH v2 15/17] net/mlx5: support flow integrity in HWS group 0 Suanming Mou
2022-09-28  3:31   ` [PATCH v2 16/17] net/mlx5: support device control for E-Switch default rule Suanming Mou
2022-09-28  3:31   ` Suanming Mou [this message]
2022-09-30 12:52 ` [PATCH v3 00/17] net/mlx5: HW steering PMD update Suanming Mou
2022-09-30 12:52   ` [PATCH v3 01/17] net/mlx5: fix invalid flow attributes Suanming Mou
2022-09-30 12:53   ` [PATCH v3 02/17] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-09-30 12:53   ` [PATCH v3 03/17] net/mlx5: add shared header reformat support Suanming Mou
2022-09-30 12:53   ` [PATCH v3 04/17] net/mlx5: add modify field hws support Suanming Mou
2022-09-30 12:53   ` [PATCH v3 05/17] net/mlx5: add HW steering port action Suanming Mou
2022-09-30 12:53   ` [PATCH v3 06/17] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-09-30 12:53   ` [PATCH v3 07/17] net/mlx5: add HW steering meter action Suanming Mou
2022-09-30 12:53   ` [PATCH v3 08/17] net/mlx5: add HW steering counter action Suanming Mou
2022-09-30 12:53   ` [PATCH v3 09/17] net/mlx5: support DR action template API Suanming Mou
2022-09-30 12:53   ` [PATCH v3 10/17] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-09-30 12:53   ` [PATCH v3 11/17] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-09-30 12:53   ` [PATCH v3 12/17] net/mlx5: implement METER MARK indirect action for HWS Suanming Mou
2022-09-30 12:53   ` [PATCH v3 13/17] net/mlx5: add HWS AGE action support Suanming Mou
2022-09-30 12:53   ` [PATCH v3 14/17] net/mlx5: add async action push and pull support Suanming Mou
2022-09-30 12:53   ` [PATCH v3 15/17] net/mlx5: support flow integrity in HWS group 0 Suanming Mou
2022-09-30 12:53   ` [PATCH v3 16/17] net/mlx5: support device control for E-Switch default rule Suanming Mou
2022-09-30 12:53   ` [PATCH v3 17/17] net/mlx5: support device control of representor matching Suanming Mou
2022-10-19 16:25 ` [PATCH v4 00/18] net/mlx5: HW steering PMD update Suanming Mou
2022-10-19 16:25   ` [PATCH v4 01/18] net/mlx5: fix invalid flow attributes Suanming Mou
2022-10-19 16:25   ` [PATCH v4 02/18] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-10-19 16:25   ` [PATCH v4 03/18] net/mlx5: add shared header reformat support Suanming Mou
2022-10-19 16:25   ` [PATCH v4 04/18] net/mlx5: add modify field hws support Suanming Mou
2022-10-19 16:25   ` [PATCH v4 05/18] net/mlx5: add HW steering port action Suanming Mou
2022-10-19 16:25   ` [PATCH v4 06/18] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-10-19 16:25   ` [PATCH v4 07/18] net/mlx5: add HW steering meter action Suanming Mou
2022-10-19 16:25   ` [PATCH v4 08/18] net/mlx5: add HW steering counter action Suanming Mou
2022-10-19 16:25   ` [PATCH v4 09/18] net/mlx5: support DR action template API Suanming Mou
2022-10-19 16:25   ` [PATCH v4 10/18] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-10-19 16:25   ` [PATCH v4 11/18] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-10-19 16:25   ` [PATCH v4 12/18] net/mlx5: implement METER MARK indirect action for HWS Suanming Mou
2022-10-19 16:25   ` [PATCH v4 13/18] net/mlx5: add HWS AGE action support Suanming Mou
2022-10-19 16:25   ` [PATCH v4 14/18] net/mlx5: add async action push and pull support Suanming Mou
2022-10-19 16:25   ` [PATCH v4 15/18] net/mlx5: support flow integrity in HWS group 0 Suanming Mou
2022-10-19 16:25   ` [PATCH v4 16/18] net/mlx5: support device control for E-Switch default rule Suanming Mou
2022-10-19 16:25   ` [PATCH v4 17/18] net/mlx5: support device control of representor matching Suanming Mou
2022-10-19 16:25   ` [PATCH v4 18/18] net/mlx5: create control flow rules with HWS Suanming Mou
2022-10-20  3:21 ` [PATCH v5 00/18] net/mlx5: HW steering PMD update Suanming Mou
2022-10-20  3:21   ` [PATCH v5 01/18] net/mlx5: fix invalid flow attributes Suanming Mou
2022-10-20  3:21   ` [PATCH v5 02/18] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-10-20  3:21   ` [PATCH v5 03/18] net/mlx5: add shared header reformat support Suanming Mou
2022-10-20  3:21   ` [PATCH v5 04/18] net/mlx5: add modify field hws support Suanming Mou
2022-10-20  3:21   ` [PATCH v5 05/18] net/mlx5: add HW steering port action Suanming Mou
2022-10-20  3:21   ` [PATCH v5 06/18] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-10-20  3:21   ` [PATCH v5 07/18] net/mlx5: add HW steering meter action Suanming Mou
2022-10-20  3:22   ` [PATCH v5 08/18] net/mlx5: add HW steering counter action Suanming Mou
2022-10-20  3:22   ` [PATCH v5 09/18] net/mlx5: support DR action template API Suanming Mou
2022-10-20  3:22   ` [PATCH v5 10/18] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-10-20  3:22   ` [PATCH v5 11/18] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-10-20  3:22   ` [PATCH v5 12/18] net/mlx5: implement METER MARK indirect action for HWS Suanming Mou
2022-10-20  3:22   ` [PATCH v5 13/18] net/mlx5: add HWS AGE action support Suanming Mou
2022-10-20  3:22   ` [PATCH v5 14/18] net/mlx5: add async action push and pull support Suanming Mou
2022-10-20  3:22   ` [PATCH v5 15/18] net/mlx5: support flow integrity in HWS group 0 Suanming Mou
2022-10-20  3:22   ` [PATCH v5 16/18] net/mlx5: support device control for E-Switch default rule Suanming Mou
2022-10-20  3:22   ` [PATCH v5 17/18] net/mlx5: support device control of representor matching Suanming Mou
2022-10-20  3:22   ` [PATCH v5 18/18] net/mlx5: create control flow rules with HWS Suanming Mou
2022-10-20 15:41 ` [PATCH v6 00/18] net/mlx5: HW steering PMD update Suanming Mou
2022-10-20 15:41   ` [PATCH v6 01/18] net/mlx5: fix invalid flow attributes Suanming Mou
2022-10-24  9:43     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 02/18] net/mlx5: fix IPv6 and TCP RSS hash fields Suanming Mou
2022-10-24  9:43     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 03/18] net/mlx5: add shared header reformat support Suanming Mou
2022-10-24  9:44     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 04/18] net/mlx5: add modify field hws support Suanming Mou
2022-10-24  9:44     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 05/18] net/mlx5: add HW steering port action Suanming Mou
2022-10-24  9:44     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 06/18] net/mlx5: add extended metadata mode for hardware steering Suanming Mou
2022-10-24  9:45     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 07/18] net/mlx5: add HW steering meter action Suanming Mou
2022-10-24  9:44     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 08/18] net/mlx5: add HW steering counter action Suanming Mou
2022-10-24  9:45     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 09/18] net/mlx5: support DR action template API Suanming Mou
2022-10-24  9:45     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 10/18] net/mlx5: add HW steering connection tracking support Suanming Mou
2022-10-24  9:46     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 11/18] net/mlx5: add HW steering VLAN push, pop and VID modify flow actions Suanming Mou
2022-10-24  9:46     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 12/18] net/mlx5: implement METER MARK indirect action for HWS Suanming Mou
2022-10-24  9:46     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 13/18] net/mlx5: add HWS AGE action support Suanming Mou
2022-10-24  9:46     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 14/18] net/mlx5: add async action push and pull support Suanming Mou
2022-10-24  9:47     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 15/18] net/mlx5: support flow integrity in HWS group 0 Suanming Mou
2022-10-24  9:47     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 16/18] net/mlx5: support device control for E-Switch default rule Suanming Mou
2022-10-24  9:47     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 17/18] net/mlx5: support device control of representor matching Suanming Mou
2022-10-24  9:47     ` Slava Ovsiienko
2022-10-20 15:41   ` [PATCH v6 18/18] net/mlx5: create control flow rules with HWS Suanming Mou
2022-10-24  9:48     ` Slava Ovsiienko
2022-10-24 10:57   ` [PATCH v6 00/18] net/mlx5: HW steering PMD update Raslan Darawsheh
