DPDK patches and discussions
* [PATCH 0/4] net/mlx5: connection tracking changes
@ 2024-02-21 10:01 Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-21 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Patches 1 and 2 contain fixes for the existing implementation of
connection tracking flow actions.

Patch 3 adds support for sharing connection tracking flow actions
between ports when the ports' flow engines are configured with the
RTE_FLOW_PORT_FLAG_SHARE_INDIRECT flag set.

Patch 4 builds on the previous one and removes the limitation on the
number of ports when connection tracking flow actions are used with
the HW Steering flow engine.

Dariusz Sosnowski (3):
  net/mlx5: fix conntrack action handle representation
  net/mlx5: fix connection tracking action validation
  net/mlx5: remove port from conntrack handle representation

Suanming Mou (1):
  net/mlx5: add cross port CT object sharing

 doc/guides/nics/mlx5.rst               |   4 +-
 doc/guides/rel_notes/release_24_03.rst |   3 +
 drivers/net/mlx5/mlx5_flow.h           |  20 ++-
 drivers/net/mlx5/mlx5_flow_dv.c        |   9 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 182 +++++++++++++------------
 5 files changed, 126 insertions(+), 92 deletions(-)

--
2.25.1



* [PATCH 1/4] net/mlx5: fix conntrack action handle representation
  2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
@ 2024-02-21 10:01 ` Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-21 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Alexander Kozyrev
  Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded in 32-bit unsigned integers as follows:

- Bits 31-29 - indirect action type.
- Bits 28-25 - port on which the connection tracking action was created.
- Bits 24-0 - index of the connection tracking object.

The macro defining the bit shift for the owner part of this
representation was incorrectly defined as 22. This patch fixes that
and also aligns the documented limitations.
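
For illustration, here is a minimal sketch of the intended handle
layout, based on the macros touched by the diff below ("owner" and
"ct_obj_idx" are illustrative names, not taken from the patch):

    /*
     * Intended 32-bit CT action handle layout (SW Steering):
     *   bits 31:29 - indirect action type
     *   bits 28:25 - owner port index (4 bits -> up to 16 ports)
     *   bits 24:0  - CT object index (25 bits -> up to 32M objects)
     * With the old shift of 22, the owner field was placed at bits
     * 25:22, overlapping the object index and shrinking its usable
     * range.
     */
    uint32_t handle =
        (MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) |
        ((owner & MLX5_INDIRECT_ACT_CT_OWNER_MASK) <<
         MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) |
        (ct_obj_idx & ((1u << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1));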

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst     | 4 ++--
 drivers/net/mlx5/mlx5_flow.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index fa013b03bb..b78753696a 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -739,8 +739,8 @@ Limitations

   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 256 ports maximum.
-  - 4M connections maximum with ``dv_flow_en`` 1 mode. 16M with ``dv_flow_en`` 2.
+  - 16 ports maximum.
+  - 32M connections maximum.

 - Multi-thread flow insertion:

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..b4bf96cd64 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -77,7 +77,7 @@ enum mlx5_indirect_type {
 /* Now, the maximal ports will be supported is 16, action number is 32M. */
 #define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x10

-#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22
+#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)

 /* 29-31: type, 25-28: owner port, 0-24: index */
--
2.25.1



* [PATCH 2/4] net/mlx5: fix connection tracking action validation
  2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
@ 2024-02-21 10:01 ` Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-21 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded as 32-bit unsigned integers, where the port ID is stored
in bits 28-25. Because of this, connection tracking flow actions
cannot be created on ports with IDs higher than 15.
This patch adds the missing validation.
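
As a worked illustration of the failure mode this check prevents (an
editor's sketch; variable names are illustrative):

    /* The owner field is 4 bits: MLX5_INDIRECT_ACT_CT_OWNER_MASK == 0xF. */
    uint32_t port_id = 16;                                      /* first invalid ID */
    uint32_t owner = port_id & MLX5_INDIRECT_ACT_CT_OWNER_MASK; /* == 0 */
    /*
     * A handle generated for port 16 would decode back to port 0,
     * i.e. the wrong owner, so creation must be rejected instead.
     */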

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 9 +++++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 7 +++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6fded15d91..0604f92531 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13861,6 +13861,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "Connection is not supported");
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	idx = flow_dv_aso_ct_alloc(dev, error);
 	if (!idx)
 		return rte_flow_error_set(error, rte_errno,
@@ -16558,6 +16565,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		ret = flow_dv_translate_create_conntrack(dev, action->conf,
 							 err);
+		if (!ret)
+			break;
 		idx = MLX5_INDIRECT_ACT_CT_GEN_IDX(PORT_ID(priv), ret);
 		break;
 	default:
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 3bb3a9a178..2f366e9078 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10048,6 +10048,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
--
2.25.1



* [PATCH 3/4] net/mlx5: add cross port CT object sharing
  2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
@ 2024-02-21 10:01 ` Dariusz Sosnowski
  2024-02-21 10:01 ` [PATCH 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-21 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

From: Suanming Mou <suanmingm@nvidia.com>

This commit adds cross-port CT object sharing.

A shared CT object shares the same DevX objects, but each port
allocates its own action locally. Once a CT object is shared between
two flows on different ports, the two flows use their own local
actions with the same offset index.

The shared CT object can only be created/updated/queried/destroyed
by the host port.
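
A rough sketch of how an application enables this sharing through the
generic rte_flow API (an editor's illustration, not part of the patch;
port IDs, sizes and attribute values are arbitrary):

    #include <rte_flow.h>

    struct rte_flow_error err;
    uint16_t nb_queues = 1;
    const struct rte_flow_queue_attr qattr = { .size = 64 };
    const struct rte_flow_queue_attr *qattrs[] = { &qattr };
    /* Port 0 acts as the host and owns the CT object pool. */
    const struct rte_flow_port_attr host_attr = {
        .nb_conn_tracks = 1 << 10,
    };
    /* Port 1 is a guest sharing the host's indirect actions. */
    const struct rte_flow_port_attr guest_attr = {
        .host_port_id = 0,
        .flags = RTE_FLOW_PORT_FLAG_SHARE_INDIRECT,
    };

    rte_flow_configure(0, &host_attr, nb_queues, qattrs, &err);
    rte_flow_configure(1, &guest_attr, nb_queues, qattrs, &err);

    /* The CT action handle may only be created on the host port... */
    const struct rte_flow_action_conntrack ct_conf = { 0 }; /* fields elided */
    const struct rte_flow_action ct_action = {
        .type = RTE_FLOW_ACTION_TYPE_CONNTRACK,
        .conf = &ct_conf,
    };
    const struct rte_flow_indir_action_conf indir_conf = { .ingress = 1 };
    struct rte_flow_action_handle *ct =
        rte_flow_action_handle_create(0, &indir_conf, &ct_action, &err);
    /* ...and can then be referenced by flow rules on both ports. */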

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_03.rst |   3 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 145 ++++++++++++++-----------
 2 files changed, 86 insertions(+), 62 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 619459baae..2262f350b5 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -133,6 +133,9 @@ New Features
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_SEQ_NUM`` flow action.
   * Added HW steering support for modify field ``RTE_FLOW_FIELD_ESP_PROTO`` flow action.

+  * Added support for sharing indirect action objects of type ``RTE_FLOW_ACTION_TYPE_CONNTRACK``
+    with HW steering flow engine.
+

 Removed Items
 -------------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2f366e9078..89066e7214 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -564,7 +564,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_aso_ct_action *ct;

 	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
-	if (!ct || mlx5_aso_ct_available(priv->sh, queue, ct))
+	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
 	rule_act->aso_ct.offset = ct->offset;
@@ -3835,9 +3835,11 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	if (ret_comp < n_res && priv->hws_mpool)
 		ret_comp += mlx5_aso_pull_completion(&priv->hws_mpool->sq[queue],
 				&res[ret_comp], n_res - ret_comp);
-	if (ret_comp < n_res && priv->hws_ctpool)
-		ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
-				&res[ret_comp], n_res - ret_comp);
+	if (!priv->shared_host) {
+		if (ret_comp < n_res && priv->hws_ctpool)
+			ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
+					&res[ret_comp], n_res - ret_comp);
+	}
 	if (ret_comp < n_res && priv->quota_ctx.sq)
 		ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
 						     &res[ret_comp],
@@ -8797,15 +8799,19 @@ flow_hw_ct_mng_destroy(struct rte_eth_dev *dev,
 }

 static void
-flow_hw_ct_pool_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_ct_pool_destroy(struct rte_eth_dev *dev,
 			struct mlx5_aso_ct_pool *pool)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
 	if (pool->dr_action)
 		mlx5dr_action_destroy(pool->dr_action);
-	if (pool->devx_obj)
-		claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
-	if (pool->cts)
-		mlx5_ipool_destroy(pool->cts);
+	if (!priv->shared_host) {
+		if (pool->devx_obj)
+			claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
+		if (pool->cts)
+			mlx5_ipool_destroy(pool->cts);
+	}
 	mlx5_free(pool);
 }

@@ -8829,51 +8835,56 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev,
 		.type = "mlx5_hw_ct_action",
 	};
 	int reg_id;
-	uint32_t flags;
+	uint32_t flags = 0;

-	if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-		DRV_LOG(ERR, "Connection tracking is not supported "
-			     "in cross vHCA sharing mode");
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
-							  priv->sh->cdev->pdn,
-							  log_obj_size);
-	if (!obj) {
-		rte_errno = ENODATA;
-		DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
-		goto err;
+	if (!priv->shared_host) {
+		/*
+		 * No need for local cache if CT number is a small number. Since
+		 * flow insertion rate will be very limited in that case. Here let's
+		 * set the number to less than default trunk size 4K.
+		 */
+		if (nb_cts <= cfg.trunk_size) {
+			cfg.per_core_cache = 0;
+			cfg.trunk_size = nb_cts;
+		} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+			cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+		}
+		cfg.max_idx = nb_cts;
+		pool->cts = mlx5_ipool_create(&cfg);
+		if (!pool->cts)
+			goto err;
+		obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
+								  priv->sh->cdev->pdn,
+								  log_obj_size);
+		if (!obj) {
+			rte_errno = ENODATA;
+			DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
+			goto err;
+		}
+		pool->devx_obj = obj;
+	} else {
+		struct rte_eth_dev *host_dev = priv->shared_host;
+		struct mlx5_priv *host_priv = host_dev->data->dev_private;
+
+		pool->devx_obj = host_priv->hws_ctpool->devx_obj;
+		pool->cts = host_priv->hws_ctpool->cts;
+		MLX5_ASSERT(pool->cts);
+		MLX5_ASSERT(!port_attr->nb_conn_tracks);
 	}
-	pool->devx_obj = obj;
 	reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL);
-	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	flags |= MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
 	if (priv->sh->config.dv_esw_en && priv->master)
 		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
 	pool->dr_action = mlx5dr_action_create_aso_ct(priv->dr_ctx,
-						      (struct mlx5dr_devx_obj *)obj,
+						      (struct mlx5dr_devx_obj *)pool->devx_obj,
 						      reg_id - REG_C_0, flags);
 	if (!pool->dr_action)
 		goto err;
-	/*
-	 * No need for local cache if CT number is a small number. Since
-	 * flow insertion rate will be very limited in that case. Here let's
-	 * set the number to less than default trunk size 4K.
-	 */
-	if (nb_cts <= cfg.trunk_size) {
-		cfg.per_core_cache = 0;
-		cfg.trunk_size = nb_cts;
-	} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
-		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
-	}
-	pool->cts = mlx5_ipool_create(&cfg);
-	if (!pool->cts)
-		goto err;
 	pool->sq = priv->ct_mng->aso_sqs;
 	/* Assign the last extra ASO SQ as public SQ. */
 	pool->shared_sq = &priv->ct_mng->aso_sqs[priv->nb_queue - 1];
@@ -9686,14 +9697,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	if (!priv->shared_host)
 		flow_hw_create_send_to_kernel_actions(priv);
 	if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) {
-		mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
-			   sizeof(*priv->ct_mng);
-		priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
-					   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-		if (!priv->ct_mng)
-			goto err;
-		if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
-			goto err;
+		if (!priv->shared_host) {
+			mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
+				sizeof(*priv->ct_mng);
+			priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
+						RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+			if (!priv->ct_mng)
+				goto err;
+			if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
+				goto err;
+		}
 		priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr);
 		if (!priv->hws_ctpool)
 			goto err;
@@ -9914,17 +9927,20 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev)
 }

 static int
-flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	struct rte_eth_dev *owndev = &rte_eth_devices[owner];
-	struct mlx5_priv *priv = owndev->data->dev_private;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;

+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT destruction is not allowed to guest port");
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
@@ -9947,14 +9963,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;

-	if (owner != PORT_ID(priv))
-		return rte_flow_error_set(error, EACCES,
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
-				"Can't query CT object owned by another port");
+				"CT query is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -9984,15 +9999,14 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;
 	int ret = 0;

-	if (PORT_ID(priv) != owner)
-		return rte_flow_error_set(error, EACCES,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  NULL,
-					  "Can't update CT object owned by another port");
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT update is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -10042,6 +10056,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 	int ret;
 	bool async = !!(queue != MLX5_HW_INV_QUEUE);

+	if (priv->shared_host) {
+		rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT create is not allowed to guest port");
+		return NULL;
+	}
 	if (!pool) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
--
2.25.1



* [PATCH 4/4] net/mlx5: remove port from conntrack handle representation
  2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                   ` (2 preceding siblings ...)
  2024-02-21 10:01 ` [PATCH 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
@ 2024-02-21 10:01 ` Dariusz Sosnowski
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-21 10:01 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch removes the owner port index from the integer
representation of indirect action handles for conntrack flow actions
in the mlx5 PMD.
This index is not needed when the HW Steering flow engine is enabled,
because either:

- a port references its own indirect actions, or
- a port references the indirect actions of the host port when
  indirect action sharing was configured.

In both cases it is explicitly known which port owns the action.
The port index included in the action handle introduced an
unnecessary limitation and caused undefined behavior when an
application used more than the supported number of ports.

This patch removes the port index from the indirect conntrack action
handle representation when the HW Steering flow engine is used.
It does not affect the SW Steering flow engine.
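
A minimal sketch of the resulting encode/decode with HW Steering,
using the macros added or referenced by this patch ("ct_idx" is an
illustrative variable):

    /*
     * HW Steering CT action handle after this patch:
     *   bits 31:29 - indirect action type
     *   bits 28:0  - CT object index (no owner port field)
     */
    struct rte_flow_action_handle *handle =
        MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(ct_idx);
    /*
     * Decoding needs only the index; the owning port is implied by the
     * device on which the handle is used (the port itself or its
     * configured host port).
     */
    uint32_t idx = MLX5_INDIRECT_ACTION_IDX_GET(handle);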

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 18 +++++++++++---
 drivers/net/mlx5/mlx5_flow_hw.c | 44 +++++++++++----------------------
 2 files changed, 28 insertions(+), 34 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b4bf96cd64..187f440893 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -80,7 +80,12 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)

-/* 29-31: type, 25-28: owner port, 0-24: index */
+/*
+ * When SW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:25 - port index of the action owner
+ * - bits 24:0 - action index
+ */
 #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \
 	((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \
 	 (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \
@@ -93,9 +98,14 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_GET_IDX(index) \
 	((index) & ((1 << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1))

-#define MLX5_ACTION_CTX_CT_GET_IDX  MLX5_INDIRECT_ACT_CT_GET_IDX
-#define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER
-#define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
+/*
+ * When HW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:0 - action index
+ */
+#define MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(index) \
+	((struct rte_flow_action_handle *)(uintptr_t) \
+	 ((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (index)))

 enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 89066e7214..e26ba1d7da 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -563,7 +563,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_action *ct;

-	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
+	ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
@@ -2455,8 +2455,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			if (masks->conf) {
-				ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-					 ((uint32_t)(uintptr_t)actions->conf);
+				ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(actions->conf);
 				if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
 						       &acts->rule_acts[dr_pos]))
 					goto err;
@@ -3172,8 +3171,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			job->flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-				 ((uint32_t)(uintptr_t)action->conf);
+			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
 			if (flow_hw_ct_compile(dev, queue, ct_idx,
 					       &rule_acts[act_data->action_dst]))
 				return -1;
@@ -3787,16 +3785,14 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 			aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
 			aso_mtr->state = ASO_METER_READY;
 		} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			aso_ct->state = ASO_CONNTRACK_READY;
 		}
 	} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
 		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
 		if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			mlx5_aso_ct_obj_analyze(job->query.user,
 						job->query.hw);
@@ -9931,7 +9927,6 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
@@ -9941,7 +9936,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT destruction is not allowed to guest port");
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -9950,7 +9945,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 	}
 	__atomic_store_n(&ct->state, ASO_CONNTRACK_FREE,
 				 __ATOMIC_RELAXED);
-	mlx5_ipool_free(pool->cts, ct_idx);
+	mlx5_ipool_free(pool->cts, idx);
 	return 0;
 }

@@ -9963,15 +9958,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint32_t ct_idx;

 	if (priv->shared_host)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT query is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -9999,7 +9992,6 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint32_t ct_idx;
 	int ret = 0;

 	if (priv->shared_host)
@@ -10007,8 +9999,7 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT update is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10069,13 +10060,6 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
-	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "CT supports port indexes up to "
-				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
-		return 0;
-	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
@@ -10105,8 +10089,7 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 			return 0;
 		}
 	}
-	return (struct rte_flow_action_handle *)(uintptr_t)
-		MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx);
+	return MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(ct_idx);
 }

 /**
@@ -10447,7 +10430,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		if (ct_conf->state)
 			aso = true;
-		ret = flow_hw_conntrack_update(dev, queue, update, act_idx,
+		ret = flow_hw_conntrack_update(dev, queue, update, idx,
 					       job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
@@ -10536,7 +10519,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 		mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
-		ret = flow_hw_conntrack_destroy(dev, act_idx, error);
+		ret = flow_hw_conntrack_destroy(dev, idx, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
 		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
@@ -10822,6 +10805,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = MLX5_INDIRECT_ACTION_IDX_GET(handle);
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
 	bool push = flow_hw_action_push(attr);
@@ -10845,7 +10829,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 		aso = true;
 		if (job)
 			job->query.user = data;
-		ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
+		ret = flow_hw_conntrack_query(dev, queue, idx, data,
 					      job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
--
2.25.1



* [PATCH v2 0/4] net/mlx5: connection tracking changes
  2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                   ` (3 preceding siblings ...)
  2024-02-21 10:01 ` [PATCH 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
@ 2024-02-23 14:23 ` Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
                     ` (4 more replies)
  4 siblings, 5 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-23 14:23 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Patches 1 and 2 contain fixes for the existing implementation of
connection tracking flow actions.

Patch 3 adds support for sharing connection tracking flow actions
between ports when the ports' flow engines are configured with the
RTE_FLOW_PORT_FLAG_SHARE_INDIRECT flag set.

Patch 4 builds on the previous one and removes the limitation on the
number of ports when connection tracking flow actions are used with
the HW Steering flow engine.

v2:
- Rebased on top of v24.03-rc1
- Updated mlx5 docs.

Dariusz Sosnowski (3):
  net/mlx5: fix conntrack action handle representation
  net/mlx5: fix connection tracking action validation
  net/mlx5: remove port from conntrack handle representation

Suanming Mou (1):
  net/mlx5: add cross port CT object sharing

 doc/guides/nics/mlx5.rst               |   4 +-
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/mlx5/mlx5_flow.h           |  20 ++-
 drivers/net/mlx5/mlx5_flow_dv.c        |   9 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 182 +++++++++++++------------
 5 files changed, 125 insertions(+), 92 deletions(-)

--
2.34.1



* [PATCH v2 1/4] net/mlx5: fix conntrack action handle representation
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
@ 2024-02-23 14:23   ` Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-23 14:23 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Alexander Kozyrev
  Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded in 32-bit unsigned integers as follows:

- Bits 31-29 - indirect action type.
- Bits 28-25 - port on which the connection tracking action was created.
- Bits 24-0 - index of the connection tracking object.

The macro defining the bit shift for the owner part of this
representation was incorrectly defined as 22. This patch fixes that
and also aligns the documented limitations.

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst     | 4 ++--
 drivers/net/mlx5/mlx5_flow.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 0d2213497a..90ae3f3047 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -783,8 +783,8 @@ Limitations
 
   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 256 ports maximum.
-  - 4M connections maximum with ``dv_flow_en`` 1 mode. 16M with ``dv_flow_en`` 2.
+  - 16 ports maximum.
+  - 32M connections maximum.
 
 - Multi-thread flow insertion:
 
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..b4bf96cd64 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -77,7 +77,7 @@ enum mlx5_indirect_type {
 /* Now, the maximal ports will be supported is 16, action number is 32M. */
 #define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x10
 
-#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22
+#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)
 
 /* 29-31: type, 25-28: owner port, 0-24: index */
-- 
2.34.1



* [PATCH v2 2/4] net/mlx5: fix connection tracking action validation
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
@ 2024-02-23 14:23   ` Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-23 14:23 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded as 32-bit unsigned integers, where the port ID is stored
in bits 28-25. Because of this, connection tracking flow actions
cannot be created on ports with IDs higher than 15.
This patch adds the missing validation.

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 9 +++++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 7 +++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 23a2388320..c78ef1f616 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13861,6 +13861,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "Connection is not supported");
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	idx = flow_dv_aso_ct_alloc(dev, error);
 	if (!idx)
 		return rte_flow_error_set(error, rte_errno,
@@ -16558,6 +16565,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		ret = flow_dv_translate_create_conntrack(dev, action->conf,
 							 err);
+		if (!ret)
+			break;
 		idx = MLX5_INDIRECT_ACT_CT_GEN_IDX(PORT_ID(priv), ret);
 		break;
 	default:
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index bcf43f5457..366a6956d2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10048,6 +10048,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
-- 
2.34.1



* [PATCH v2 3/4] net/mlx5: add cross port CT object sharing
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
@ 2024-02-23 14:23   ` Dariusz Sosnowski
  2024-02-23 14:23   ` [PATCH v2 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-23 14:23 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

From: Suanming Mou <suanmingm@nvidia.com>

This commit adds cross-port CT object sharing.

A shared CT object shares the same DevX objects, but each port
allocates its own action locally. Once a CT object is shared between
two flows on different ports, the two flows use their own local
actions with the same offset index.

The shared CT object can only be created/updated/queried/destroyed
by the host port.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 145 ++++++++++++++-----------
 2 files changed, 85 insertions(+), 62 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 879bb4944c..b660c2c7cf 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -130,6 +130,8 @@ New Features
   * Added support for matching a random value.
   * Added support for comparing result between packet fields or value.
   * Added support for accumulating value of field into another one.
+  * Added support for sharing indirect action objects of type ``RTE_FLOW_ACTION_TYPE_CONNTRACK``
+    with HW steering flow engine.
 
 * **Updated Marvell cnxk crypto driver.**
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 366a6956d2..f53ed1144b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -564,7 +564,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_aso_ct_action *ct;
 
 	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
-	if (!ct || mlx5_aso_ct_available(priv->sh, queue, ct))
+	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
 	rule_act->aso_ct.offset = ct->offset;
@@ -3835,9 +3835,11 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 	if (ret_comp < n_res && priv->hws_mpool)
 		ret_comp += mlx5_aso_pull_completion(&priv->hws_mpool->sq[queue],
 				&res[ret_comp], n_res - ret_comp);
-	if (ret_comp < n_res && priv->hws_ctpool)
-		ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
-				&res[ret_comp], n_res - ret_comp);
+	if (!priv->shared_host) {
+		if (ret_comp < n_res && priv->hws_ctpool)
+			ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
+					&res[ret_comp], n_res - ret_comp);
+	}
 	if (ret_comp < n_res && priv->quota_ctx.sq)
 		ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
 						     &res[ret_comp],
@@ -8797,15 +8799,19 @@ flow_hw_ct_mng_destroy(struct rte_eth_dev *dev,
 }
 
 static void
-flow_hw_ct_pool_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_ct_pool_destroy(struct rte_eth_dev *dev,
 			struct mlx5_aso_ct_pool *pool)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
 	if (pool->dr_action)
 		mlx5dr_action_destroy(pool->dr_action);
-	if (pool->devx_obj)
-		claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
-	if (pool->cts)
-		mlx5_ipool_destroy(pool->cts);
+	if (!priv->shared_host) {
+		if (pool->devx_obj)
+			claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
+		if (pool->cts)
+			mlx5_ipool_destroy(pool->cts);
+	}
 	mlx5_free(pool);
 }
 
@@ -8829,51 +8835,56 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev,
 		.type = "mlx5_hw_ct_action",
 	};
 	int reg_id;
-	uint32_t flags;
+	uint32_t flags = 0;
 
-	if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-		DRV_LOG(ERR, "Connection tracking is not supported "
-			     "in cross vHCA sharing mode");
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
-							  priv->sh->cdev->pdn,
-							  log_obj_size);
-	if (!obj) {
-		rte_errno = ENODATA;
-		DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
-		goto err;
+	if (!priv->shared_host) {
+		/*
+		 * No need for local cache if CT number is a small number. Since
+		 * flow insertion rate will be very limited in that case. Here let's
+		 * set the number to less than default trunk size 4K.
+		 */
+		if (nb_cts <= cfg.trunk_size) {
+			cfg.per_core_cache = 0;
+			cfg.trunk_size = nb_cts;
+		} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+			cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+		}
+		cfg.max_idx = nb_cts;
+		pool->cts = mlx5_ipool_create(&cfg);
+		if (!pool->cts)
+			goto err;
+		obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
+								  priv->sh->cdev->pdn,
+								  log_obj_size);
+		if (!obj) {
+			rte_errno = ENODATA;
+			DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
+			goto err;
+		}
+		pool->devx_obj = obj;
+	} else {
+		struct rte_eth_dev *host_dev = priv->shared_host;
+		struct mlx5_priv *host_priv = host_dev->data->dev_private;
+
+		pool->devx_obj = host_priv->hws_ctpool->devx_obj;
+		pool->cts = host_priv->hws_ctpool->cts;
+		MLX5_ASSERT(pool->cts);
+		MLX5_ASSERT(!port_attr->nb_conn_tracks);
 	}
-	pool->devx_obj = obj;
 	reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL);
-	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	flags |= MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
 	if (priv->sh->config.dv_esw_en && priv->master)
 		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
 	pool->dr_action = mlx5dr_action_create_aso_ct(priv->dr_ctx,
-						      (struct mlx5dr_devx_obj *)obj,
+						      (struct mlx5dr_devx_obj *)pool->devx_obj,
 						      reg_id - REG_C_0, flags);
 	if (!pool->dr_action)
 		goto err;
-	/*
-	 * No need for local cache if CT number is a small number. Since
-	 * flow insertion rate will be very limited in that case. Here let's
-	 * set the number to less than default trunk size 4K.
-	 */
-	if (nb_cts <= cfg.trunk_size) {
-		cfg.per_core_cache = 0;
-		cfg.trunk_size = nb_cts;
-	} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
-		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
-	}
-	pool->cts = mlx5_ipool_create(&cfg);
-	if (!pool->cts)
-		goto err;
 	pool->sq = priv->ct_mng->aso_sqs;
 	/* Assign the last extra ASO SQ as public SQ. */
 	pool->shared_sq = &priv->ct_mng->aso_sqs[priv->nb_queue - 1];
@@ -9686,14 +9697,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	if (!priv->shared_host)
 		flow_hw_create_send_to_kernel_actions(priv);
 	if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) {
-		mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
-			   sizeof(*priv->ct_mng);
-		priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
-					   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-		if (!priv->ct_mng)
-			goto err;
-		if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
-			goto err;
+		if (!priv->shared_host) {
+			mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
+				sizeof(*priv->ct_mng);
+			priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
+						RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+			if (!priv->ct_mng)
+				goto err;
+			if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
+				goto err;
+		}
 		priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr);
 		if (!priv->hws_ctpool)
 			goto err;
@@ -9914,17 +9927,20 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev)
 }
 
 static int
-flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	struct rte_eth_dev *owndev = &rte_eth_devices[owner];
-	struct mlx5_priv *priv = owndev->data->dev_private;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT destruction is not allowed to guest port");
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
@@ -9947,14 +9963,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;
 
-	if (owner != PORT_ID(priv))
-		return rte_flow_error_set(error, EACCES,
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
-				"Can't query CT object owned by another port");
+				"CT query is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -9984,15 +9999,14 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;
 	int ret = 0;
 
-	if (PORT_ID(priv) != owner)
-		return rte_flow_error_set(error, EACCES,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  NULL,
-					  "Can't update CT object owned by another port");
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT update is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -10042,6 +10056,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 	int ret;
 	bool async = !!(queue != MLX5_HW_INV_QUEUE);
 
+	if (priv->shared_host) {
+		rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT create is not allowed to guest port");
+		return NULL;
+	}
 	if (!pool) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
-- 
2.34.1



* [PATCH v2 4/4] net/mlx5: remove port from conntrack handle representation
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                     ` (2 preceding siblings ...)
  2024-02-23 14:23   ` [PATCH v2 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
@ 2024-02-23 14:23   ` Dariusz Sosnowski
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-23 14:23 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch removes the owner port index from the integer
representation of indirect action handles for conntrack flow actions
in the mlx5 PMD.
This index is not needed when the HW Steering flow engine is enabled,
because either:

- a port references its own indirect actions, or
- a port references the indirect actions of the host port when
  indirect action sharing was configured.

In both cases it is explicitly known which port owns the action.
The port index included in the action handle introduced an
unnecessary limitation and caused undefined behavior when an
application used more than the supported number of ports.

This patch removes the port index from the indirect conntrack action
handle representation when the HW Steering flow engine is used.
It does not affect the SW Steering flow engine.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst        |  2 +-
 drivers/net/mlx5/mlx5_flow.h    | 18 +++++++++++---
 drivers/net/mlx5/mlx5_flow_hw.c | 44 +++++++++++----------------------
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 90ae3f3047..7729fe4151 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -783,7 +783,7 @@ Limitations
 
   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 16 ports maximum.
+  - 16 ports maximum (with ``dv_flow_en=1``).
   - 32M connections maximum.
 
 - Multi-thread flow insertion:
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b4bf96cd64..187f440893 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -80,7 +80,12 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)
 
-/* 29-31: type, 25-28: owner port, 0-24: index */
+/*
+ * When SW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:25 - port index of the action owner
+ * - bits 24:0 - action index
+ */
 #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \
 	((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \
 	 (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \
@@ -93,9 +98,14 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_GET_IDX(index) \
 	((index) & ((1 << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1))
 
-#define MLX5_ACTION_CTX_CT_GET_IDX  MLX5_INDIRECT_ACT_CT_GET_IDX
-#define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER
-#define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
+/*
+ * When HW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:0 - action index
+ */
+#define MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(index) \
+	((struct rte_flow_action_handle *)(uintptr_t) \
+	 ((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (index)))
 
 enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index f53ed1144b..905c10a90c 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -563,7 +563,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_action *ct;
 
-	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
+	ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
@@ -2455,8 +2455,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			if (masks->conf) {
-				ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-					 ((uint32_t)(uintptr_t)actions->conf);
+				ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(actions->conf);
 				if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
 						       &acts->rule_acts[dr_pos]))
 					goto err;
@@ -3172,8 +3171,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			job->flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-				 ((uint32_t)(uintptr_t)action->conf);
+			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
 			if (flow_hw_ct_compile(dev, queue, ct_idx,
 					       &rule_acts[act_data->action_dst]))
 				return -1;
@@ -3787,16 +3785,14 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 			aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
 			aso_mtr->state = ASO_METER_READY;
 		} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			aso_ct->state = ASO_CONNTRACK_READY;
 		}
 	} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
 		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
 		if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			mlx5_aso_ct_obj_analyze(job->query.user,
 						job->query.hw);
@@ -9931,7 +9927,6 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
@@ -9941,7 +9936,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT destruction is not allowed to guest port");
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -9950,7 +9945,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 	}
 	__atomic_store_n(&ct->state, ASO_CONNTRACK_FREE,
 				 __ATOMIC_RELAXED);
-	mlx5_ipool_free(pool->cts, ct_idx);
+	mlx5_ipool_free(pool->cts, idx);
 	return 0;
 }
 
@@ -9963,15 +9958,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint32_t ct_idx;
 
 	if (priv->shared_host)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT query is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -9999,7 +9992,6 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint32_t ct_idx;
 	int ret = 0;
 
 	if (priv->shared_host)
@@ -10007,8 +9999,7 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT update is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10069,13 +10060,6 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
-	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "CT supports port indexes up to "
-				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
-		return 0;
-	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
@@ -10105,8 +10089,7 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 			return 0;
 		}
 	}
-	return (struct rte_flow_action_handle *)(uintptr_t)
-		MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx);
+	return MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(ct_idx);
 }
 
 /**
@@ -10447,7 +10430,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		if (ct_conf->state)
 			aso = true;
-		ret = flow_hw_conntrack_update(dev, queue, update, act_idx,
+		ret = flow_hw_conntrack_update(dev, queue, update, idx,
 					       job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
@@ -10536,7 +10519,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 		mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
-		ret = flow_hw_conntrack_destroy(dev, act_idx, error);
+		ret = flow_hw_conntrack_destroy(dev, idx, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
 		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
@@ -10822,6 +10805,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = MLX5_INDIRECT_ACTION_IDX_GET(handle);
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
 	bool push = flow_hw_action_push(attr);
@@ -10845,7 +10829,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 		aso = true;
 		if (job)
 			job->query.user = data;
-		ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
+		ret = flow_hw_conntrack_query(dev, queue, idx, data,
 					      job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
-- 
2.34.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 0/4] net/mlx5: connection tracking changes
  2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                     ` (3 preceding siblings ...)
  2024-02-23 14:23   ` [PATCH v2 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
@ 2024-02-27 13:52   ` Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
                       ` (4 more replies)
  4 siblings, 5 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-27 13:52 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

Patches 1 and 2 contain fixes for existing implementation of
connection tracking flow actions.

Patch 3 adds support for sharing connection tracking flow actions
between ports when ports' flow engines are configured with
RTE_FLOW_PORT_FLAG_SHARE_INDIRECT flag set.

Patch 4 is based on the previous one and removes the limitation on
number of ports when connection tracking flow actions are used
with HW Steering flow engine.

Depends-on: series-31246 ("net/mlx5: move meter init functions")

v3:
- Rebased.
- Added Depends-on tag.

v2:
- Rebased on top of v24.03-rc1
- Updated mlx5 docs.

Dariusz Sosnowski (3):
  net/mlx5: fix conntrack action handle representation
  net/mlx5: fix connection tracking action validation
  net/mlx5: remove port from conntrack handle representation

Suanming Mou (1):
  net/mlx5: add cross port CT object sharing

 doc/guides/nics/mlx5.rst               |   4 +-
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/mlx5/mlx5_flow.h           |  20 ++-
 drivers/net/mlx5/mlx5_flow_dv.c        |   9 ++
 drivers/net/mlx5/mlx5_flow_hw.c        | 180 +++++++++++++------------
 5 files changed, 123 insertions(+), 92 deletions(-)

--
2.25.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 1/4] net/mlx5: fix conntrack action handle representation
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
@ 2024-02-27 13:52     ` Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
                       ` (3 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-27 13:52 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad,
	Alexander Kozyrev
  Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded in 32-bit unsigned integers as follows:

- Bits 31-29 - indirect action type.
- Bits 28-25 - port on which connection tracking action was created.
- Bits 24-0 - index of connection tracking object.

The macro defining the bit shift for the owner part of this
representation was incorrectly defined as 22. This patch fixes that
and aligns the documented limitations accordingly.
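
For illustration, a minimal standalone sketch of the intended handle
layout follows; the constant and helper names are illustrative
stand-ins, not the exact macros from mlx5_flow.h:

#include <stdint.h>

/* Sketch of the 32-bit CT handle used by the SW steering path:
 * bits 31:29 hold the indirect action type, bits 28:25 the owner port,
 * bits 24:0 the CT object index - hence the owner shift must be 25.
 * A shift of 22 would make the owner field overlap the index bits.
 */
#define CT_TYPE_SHIFT   29
#define CT_OWNER_SHIFT  25          /* not 22 */
#define CT_OWNER_MASK   0xFu        /* 16 ports maximum */
#define CT_IDX_MASK     0x1FFFFFFu  /* 32M CT objects maximum */

static inline uint32_t
ct_handle_encode(uint32_t type, uint32_t owner, uint32_t idx)
{
	return (type << CT_TYPE_SHIFT) |
	       ((owner & CT_OWNER_MASK) << CT_OWNER_SHIFT) |
	       (idx & CT_IDX_MASK);
}

static inline uint32_t
ct_handle_owner(uint32_t handle)
{
	return (handle >> CT_OWNER_SHIFT) & CT_OWNER_MASK;
}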

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst     | 4 ++--
 drivers/net/mlx5/mlx5_flow.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 0079176ba3..db47d70b70 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -815,8 +815,8 @@ Limitations
 
   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 256 ports maximum.
-  - 4M connections maximum with ``dv_flow_en`` 1 mode. 16M with ``dv_flow_en`` 2.
+  - 16 ports maximum.
+  - 32M connections maximum.
 
 - Multi-thread flow insertion:
 
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index a4d0ff7b13..b4bf96cd64 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -77,7 +77,7 @@ enum mlx5_indirect_type {
 /* Now, the maximal ports will be supported is 16, action number is 32M. */
 #define MLX5_INDIRECT_ACT_CT_MAX_PORT 0x10
 
-#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 22
+#define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)
 
 /* 29-31: type, 25-28: owner port, 0-24: index */
-- 
2.25.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 2/4] net/mlx5: fix connection tracking action validation
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
@ 2024-02-27 13:52     ` Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
                       ` (2 subsequent siblings)
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-27 13:52 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev, stable

In the mlx5 PMD, handles to indirect connection tracking flow actions
are encoded as 32-bit unsigned integers, where the port ID is stored
in bits 28-25. Because of this, connection tracking flow actions
cannot be created on ports with IDs higher than 15.
This patch adds the missing validation.
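
As a rough, self-contained illustration of why the check is needed
(the constants below are stand-ins for the driver macros):

#include <assert.h>
#include <stdint.h>

/* The owner field occupies bits 28:25 (4 bits), so only ports 0..15 can
 * be encoded. Port 16 is truncated by the mask and would decode as
 * port 0, which is why creation has to be rejected for port IDs >= 16.
 */
#define CT_OWNER_SHIFT 25
#define CT_OWNER_MASK  0xFu

int main(void)
{
	uint32_t port_id = 16;
	uint32_t encoded = (port_id & CT_OWNER_MASK) << CT_OWNER_SHIFT;
	uint32_t decoded = (encoded >> CT_OWNER_SHIFT) & CT_OWNER_MASK;

	assert(decoded == 0); /* owner information silently lost for port 16 */
	return 0;
}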

Fixes: 463170a7c934 ("net/mlx5: support connection tracking with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 9 +++++++++
 drivers/net/mlx5/mlx5_flow_hw.c | 7 +++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 75a8a223ab..ddf19e9a51 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -13889,6 +13889,13 @@ flow_dv_translate_create_conntrack(struct rte_eth_dev *dev,
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION, NULL,
 					  "Connection is not supported");
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	idx = flow_dv_aso_ct_alloc(dev, error);
 	if (!idx)
 		return rte_flow_error_set(error, rte_errno,
@@ -16586,6 +16593,8 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 	case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 		ret = flow_dv_translate_create_conntrack(dev, action->conf,
 							 err);
+		if (!ret)
+			break;
 		idx = MLX5_INDIRECT_ACT_CT_GEN_IDX(PORT_ID(priv), ret);
 		break;
 	default:
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2a1281732a..a8e2c9cc9e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -10344,6 +10344,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
+	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "CT supports port indexes up to "
+				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
+		return 0;
+	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 3/4] net/mlx5: add cross port CT object sharing
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
@ 2024-02-27 13:52     ` Dariusz Sosnowski
  2024-02-27 13:52     ` [PATCH v3 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
  2024-02-28 10:12     ` [PATCH v3 0/4] net/mlx5: connection tracking changes Raslan Darawsheh
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-27 13:52 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

From: Suanming Mou <suanmingm@nvidia.com>

This commit adds cross port CT object sharing.

A shared CT object uses the same DevX objects, but each port allocates
its own action locally. Once the CT object is shared between two flows
on different ports, each flow uses its own local action with the same
offset index.

The shared CT object can only be created, updated, queried, and
destroyed by the host port.
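
For context, a rough sketch of how an application would request this
sharing via the rte_flow port configuration API; the host_port_id field
and the attribute values used here are assumptions to be checked against
the rte_flow headers of the targeted DPDK release:

#include <rte_flow.h>

/* Sketch only: port host_port owns the CT objects, port guest_port
 * shares them instead of allocating its own.
 */
static int
configure_shared_ct(uint16_t host_port, uint16_t guest_port)
{
	struct rte_flow_queue_attr qattr = { .size = 64 };
	const struct rte_flow_queue_attr *qattrs[] = { &qattr };
	struct rte_flow_error err;
	struct rte_flow_port_attr host_attr = {
		.nb_conn_tracks = 1 << 20, /* CT objects live on the host port */
	};
	struct rte_flow_port_attr guest_attr = {
		.host_port_id = host_port, /* assumed field for indirect object sharing */
		.flags = RTE_FLOW_PORT_FLAG_SHARE_INDIRECT,
	};

	if (rte_flow_configure(host_port, &host_attr, 1, qattrs, &err) < 0)
		return -1;
	/* A guest port must not request its own CT objects when sharing. */
	return rte_flow_configure(guest_port, &guest_attr, 1, qattrs, &err);
}

With such a configuration, a conntrack action handle created on the host
port can be referenced by flow rules on the guest port: each port keeps
its own local mlx5dr action, but both point at the same offset in the
shared DevX object.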

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/rel_notes/release_24_03.rst |   2 +
 drivers/net/mlx5/mlx5_flow_hw.c        | 143 ++++++++++++++-----------
 2 files changed, 83 insertions(+), 62 deletions(-)

diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index 76d2e60f59..23ac6568ac 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -135,6 +135,8 @@ New Features
   * Added support for copy inner fields in HWS flow engine.
   * Added support for sharing indirect action objects of type ``RTE_FLOW_ACTION_TYPE_METER_MARK``
     in HWS flow engine.
+  * Added support for sharing indirect action objects of type ``RTE_FLOW_ACTION_TYPE_CONNTRACK``
+    with HW steering flow engine.
 
 * **Updated Marvell cnxk crypto driver.**
 
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index a8e2c9cc9e..2550e0604f 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -564,7 +564,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_aso_ct_action *ct;
 
 	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
-	if (!ct || mlx5_aso_ct_available(priv->sh, queue, ct))
+	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
 	rule_act->aso_ct.offset = ct->offset;
@@ -3845,10 +3845,10 @@ __flow_hw_pull_indir_action_comp(struct rte_eth_dev *dev,
 		if (ret_comp < n_res && priv->hws_mpool)
 			ret_comp += mlx5_aso_pull_completion(&priv->hws_mpool->sq[queue],
 					&res[ret_comp], n_res - ret_comp);
+		if (ret_comp < n_res && priv->hws_ctpool)
+			ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
+					&res[ret_comp], n_res - ret_comp);
 	}
-	if (ret_comp < n_res && priv->hws_ctpool)
-		ret_comp += mlx5_aso_pull_completion(&priv->ct_mng->aso_sqs[queue],
-				&res[ret_comp], n_res - ret_comp);
 	if (ret_comp < n_res && priv->quota_ctx.sq)
 		ret_comp += mlx5_aso_pull_completion(&priv->quota_ctx.sq[queue],
 						     &res[ret_comp],
@@ -9027,15 +9027,19 @@ flow_hw_ct_mng_destroy(struct rte_eth_dev *dev,
 }
 
 static void
-flow_hw_ct_pool_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_ct_pool_destroy(struct rte_eth_dev *dev,
 			struct mlx5_aso_ct_pool *pool)
 {
+	struct mlx5_priv *priv = dev->data->dev_private;
+
 	if (pool->dr_action)
 		mlx5dr_action_destroy(pool->dr_action);
-	if (pool->devx_obj)
-		claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
-	if (pool->cts)
-		mlx5_ipool_destroy(pool->cts);
+	if (!priv->shared_host) {
+		if (pool->devx_obj)
+			claim_zero(mlx5_devx_cmd_destroy(pool->devx_obj));
+		if (pool->cts)
+			mlx5_ipool_destroy(pool->cts);
+	}
 	mlx5_free(pool);
 }
 
@@ -9059,51 +9063,56 @@ flow_hw_ct_pool_create(struct rte_eth_dev *dev,
 		.type = "mlx5_hw_ct_action",
 	};
 	int reg_id;
-	uint32_t flags;
+	uint32_t flags = 0;
 
-	if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
-		DRV_LOG(ERR, "Connection tracking is not supported "
-			     "in cross vHCA sharing mode");
-		rte_errno = ENOTSUP;
-		return NULL;
-	}
 	pool = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*pool), 0, SOCKET_ID_ANY);
 	if (!pool) {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
-							  priv->sh->cdev->pdn,
-							  log_obj_size);
-	if (!obj) {
-		rte_errno = ENODATA;
-		DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
-		goto err;
+	if (!priv->shared_host) {
+		/*
+		 * No need for local cache if CT number is a small number. Since
+		 * flow insertion rate will be very limited in that case. Here let's
+		 * set the number to less than default trunk size 4K.
+		 */
+		if (nb_cts <= cfg.trunk_size) {
+			cfg.per_core_cache = 0;
+			cfg.trunk_size = nb_cts;
+		} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
+			cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
+		}
+		cfg.max_idx = nb_cts;
+		pool->cts = mlx5_ipool_create(&cfg);
+		if (!pool->cts)
+			goto err;
+		obj = mlx5_devx_cmd_create_conn_track_offload_obj(priv->sh->cdev->ctx,
+								  priv->sh->cdev->pdn,
+								  log_obj_size);
+		if (!obj) {
+			rte_errno = ENODATA;
+			DRV_LOG(ERR, "Failed to create conn_track_offload_obj using DevX.");
+			goto err;
+		}
+		pool->devx_obj = obj;
+	} else {
+		struct rte_eth_dev *host_dev = priv->shared_host;
+		struct mlx5_priv *host_priv = host_dev->data->dev_private;
+
+		pool->devx_obj = host_priv->hws_ctpool->devx_obj;
+		pool->cts = host_priv->hws_ctpool->cts;
+		MLX5_ASSERT(pool->cts);
+		MLX5_ASSERT(!port_attr->nb_conn_tracks);
 	}
-	pool->devx_obj = obj;
 	reg_id = mlx5_flow_get_reg_id(dev, MLX5_ASO_CONNTRACK, 0, NULL);
-	flags = MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
+	flags |= MLX5DR_ACTION_FLAG_HWS_RX | MLX5DR_ACTION_FLAG_HWS_TX;
 	if (priv->sh->config.dv_esw_en && priv->master)
 		flags |= MLX5DR_ACTION_FLAG_HWS_FDB;
 	pool->dr_action = mlx5dr_action_create_aso_ct(priv->dr_ctx,
-						      (struct mlx5dr_devx_obj *)obj,
+						      (struct mlx5dr_devx_obj *)pool->devx_obj,
 						      reg_id - REG_C_0, flags);
 	if (!pool->dr_action)
 		goto err;
-	/*
-	 * No need for local cache if CT number is a small number. Since
-	 * flow insertion rate will be very limited in that case. Here let's
-	 * set the number to less than default trunk size 4K.
-	 */
-	if (nb_cts <= cfg.trunk_size) {
-		cfg.per_core_cache = 0;
-		cfg.trunk_size = nb_cts;
-	} else if (nb_cts <= MLX5_HW_IPOOL_SIZE_THRESHOLD) {
-		cfg.per_core_cache = MLX5_HW_IPOOL_CACHE_MIN;
-	}
-	pool->cts = mlx5_ipool_create(&cfg);
-	if (!pool->cts)
-		goto err;
 	pool->sq = priv->ct_mng->aso_sqs;
 	/* Assign the last extra ASO SQ as public SQ. */
 	pool->shared_sq = &priv->ct_mng->aso_sqs[priv->nb_queue - 1];
@@ -9980,14 +9989,16 @@ flow_hw_configure(struct rte_eth_dev *dev,
 	if (!priv->shared_host)
 		flow_hw_create_send_to_kernel_actions(priv);
 	if (port_attr->nb_conn_tracks || (host_priv && host_priv->hws_ctpool)) {
-		mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
-			   sizeof(*priv->ct_mng);
-		priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
-					   RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
-		if (!priv->ct_mng)
-			goto err;
-		if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
-			goto err;
+		if (!priv->shared_host) {
+			mem_size = sizeof(struct mlx5_aso_sq) * nb_q_updated +
+				sizeof(*priv->ct_mng);
+			priv->ct_mng = mlx5_malloc(MLX5_MEM_ZERO, mem_size,
+						RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);
+			if (!priv->ct_mng)
+				goto err;
+			if (mlx5_aso_ct_queue_init(priv->sh, priv->ct_mng, nb_q_updated))
+				goto err;
+		}
 		priv->hws_ctpool = flow_hw_ct_pool_create(dev, port_attr);
 		if (!priv->hws_ctpool)
 			goto err;
@@ -10210,17 +10221,20 @@ flow_hw_clear_port_info(struct rte_eth_dev *dev)
 }
 
 static int
-flow_hw_conntrack_destroy(struct rte_eth_dev *dev __rte_unused,
+flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	struct rte_eth_dev *owndev = &rte_eth_devices[owner];
-	struct mlx5_priv *priv = owndev->data->dev_private;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT destruction is not allowed to guest port");
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
@@ -10243,14 +10257,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;
 
-	if (owner != PORT_ID(priv))
-		return rte_flow_error_set(error, EACCES,
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
-				"Can't query CT object owned by another port");
+				"CT query is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -10280,15 +10293,14 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint16_t owner = (uint16_t)MLX5_ACTION_CTX_CT_GET_OWNER(idx);
 	uint32_t ct_idx;
 	int ret = 0;
 
-	if (PORT_ID(priv) != owner)
-		return rte_flow_error_set(error, EACCES,
-					  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-					  NULL,
-					  "Can't update CT object owned by another port");
+	if (priv->shared_host)
+		return rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT update is not allowed to guest port");
 	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	ct = mlx5_ipool_get(pool->cts, ct_idx);
 	if (!ct) {
@@ -10338,6 +10350,13 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 	int ret;
 	bool async = !!(queue != MLX5_HW_INV_QUEUE);
 
+	if (priv->shared_host) {
+		rte_flow_error_set(error, ENOTSUP,
+				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				NULL,
+				"CT create is not allowed to guest port");
+		return NULL;
+	}
 	if (!pool) {
 		rte_flow_error_set(error, EINVAL,
 				   RTE_FLOW_ERROR_TYPE_ACTION, NULL,
-- 
2.25.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* [PATCH v3 4/4] net/mlx5: remove port from conntrack handle representation
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                       ` (2 preceding siblings ...)
  2024-02-27 13:52     ` [PATCH v3 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
@ 2024-02-27 13:52     ` Dariusz Sosnowski
  2024-02-28 10:12     ` [PATCH v3 0/4] net/mlx5: connection tracking changes Raslan Darawsheh
  4 siblings, 0 replies; 16+ messages in thread
From: Dariusz Sosnowski @ 2024-02-27 13:52 UTC (permalink / raw)
  To: Viacheslav Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad; +Cc: dev

This patch removes the owner port index from the integer
representation of indirect conntrack flow action handles in the
mlx5 PMD.
This index is not needed when the HW Steering flow engine is enabled,
because either:

- a port references its own indirect actions, or
- a port references the indirect actions of the host port when sharing
  of indirect actions was configured.

In both cases it is explicitly known which port owns the action.
The port index included in the action handle introduced an unnecessary
limitation and caused undefined behavior when an application used more
than the supported number of ports.

With this patch, the port index is no longer part of the indirect
conntrack action handle representation when the HW Steering flow engine
is used. The SW Steering flow engine is not affected.
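
As an illustration of the resulting layout, a minimal sketch follows
(the names are stand-ins, not the driver macros): with HW Steering only
the type and the index are packed into the handle, so the index may use
bits 28:0.

#include <stdint.h>

/* Sketch of the HWS conntrack handle: bits 31:29 type, bits 28:0 index.
 * There is no owner port field - the owner is implied by the port (or
 * its host port) on which the handle is used.
 */
#define ACT_TYPE_SHIFT 29
#define ACT_IDX_MASK   ((1u << ACT_TYPE_SHIFT) - 1)

static inline uint32_t
hws_ct_handle(uint32_t type, uint32_t idx)
{
	return (type << ACT_TYPE_SHIFT) | (idx & ACT_IDX_MASK);
}

static inline uint32_t
hws_ct_index(uint32_t handle)
{
	return handle & ACT_IDX_MASK;
}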

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 doc/guides/nics/mlx5.rst        |  2 +-
 drivers/net/mlx5/mlx5_flow.h    | 18 +++++++++++---
 drivers/net/mlx5/mlx5_flow_hw.c | 44 +++++++++++----------------------
 3 files changed, 29 insertions(+), 35 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index db47d70b70..329b98f68f 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -815,7 +815,7 @@ Limitations
 
   - Cannot co-exist with ASO meter, ASO age action in a single flow rule.
   - Flow rules insertion rate and memory consumption need more optimization.
-  - 16 ports maximum.
+  - 16 ports maximum (with ``dv_flow_en=1``).
   - 32M connections maximum.
 
 - Multi-thread flow insertion:
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b4bf96cd64..187f440893 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -80,7 +80,12 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_OWNER_SHIFT 25
 #define MLX5_INDIRECT_ACT_CT_OWNER_MASK (MLX5_INDIRECT_ACT_CT_MAX_PORT - 1)
 
-/* 29-31: type, 25-28: owner port, 0-24: index */
+/*
+ * When SW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:25 - port index of the action owner
+ * - bits 24:0 - action index
+ */
 #define MLX5_INDIRECT_ACT_CT_GEN_IDX(owner, index) \
 	((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | \
 	 (((owner) & MLX5_INDIRECT_ACT_CT_OWNER_MASK) << \
@@ -93,9 +98,14 @@ enum mlx5_indirect_type {
 #define MLX5_INDIRECT_ACT_CT_GET_IDX(index) \
 	((index) & ((1 << MLX5_INDIRECT_ACT_CT_OWNER_SHIFT) - 1))
 
-#define MLX5_ACTION_CTX_CT_GET_IDX  MLX5_INDIRECT_ACT_CT_GET_IDX
-#define MLX5_ACTION_CTX_CT_GET_OWNER MLX5_INDIRECT_ACT_CT_GET_OWNER
-#define MLX5_ACTION_CTX_CT_GEN_IDX MLX5_INDIRECT_ACT_CT_GEN_IDX
+/*
+ * When HW steering flow engine is used, the CT action handles are encoded in a following way:
+ * - bits 31:29 - type
+ * - bits 28:0 - action index
+ */
+#define MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(index) \
+	((struct rte_flow_action_handle *)(uintptr_t) \
+	 ((MLX5_INDIRECT_ACTION_TYPE_CT << MLX5_INDIRECT_ACTION_TYPE_OFFSET) | (index)))
 
 enum mlx5_indirect_list_type {
 	MLX5_INDIRECT_ACTION_LIST_TYPE_ERR = 0,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2550e0604f..e48a927bf0 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -563,7 +563,7 @@ flow_hw_ct_compile(struct rte_eth_dev *dev,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_action *ct;
 
-	ct = mlx5_ipool_get(priv->hws_ctpool->cts, MLX5_ACTION_CTX_CT_GET_IDX(idx));
+	ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 	if (!ct || (!priv->shared_host && mlx5_aso_ct_available(priv->sh, queue, ct)))
 		return -1;
 	rule_act->action = priv->hws_ctpool->dr_action;
@@ -2462,8 +2462,7 @@ __flow_hw_actions_translate(struct rte_eth_dev *dev,
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
 			if (masks->conf) {
-				ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-					 ((uint32_t)(uintptr_t)actions->conf);
+				ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(actions->conf);
 				if (flow_hw_ct_compile(dev, MLX5_HW_INV_QUEUE, ct_idx,
 						       &acts->rule_acts[dr_pos]))
 					goto err;
@@ -3180,8 +3179,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
 			job->flow->cnt_id = act_data->shared_counter.id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_CONNTRACK:
-			ct_idx = MLX5_ACTION_CTX_CT_GET_IDX
-				 ((uint32_t)(uintptr_t)action->conf);
+			ct_idx = MLX5_INDIRECT_ACTION_IDX_GET(action->conf);
 			if (flow_hw_ct_compile(dev, queue, ct_idx,
 					       &rule_acts[act_data->action_dst]))
 				return -1;
@@ -3796,16 +3794,14 @@ flow_hw_pull_legacy_indirect_comp(struct rte_eth_dev *dev, struct mlx5_hw_q_job
 			aso_mtr = mlx5_ipool_get(priv->hws_mpool->idx_pool, idx);
 			aso_mtr->state = ASO_METER_READY;
 		} else if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			aso_ct->state = ASO_CONNTRACK_READY;
 		}
 	} else if (job->type == MLX5_HW_Q_JOB_TYPE_QUERY) {
 		type = MLX5_INDIRECT_ACTION_TYPE_GET(job->action);
 		if (type == MLX5_INDIRECT_ACTION_TYPE_CT) {
-			idx = MLX5_ACTION_CTX_CT_GET_IDX
-			((uint32_t)(uintptr_t)job->action);
+			idx = MLX5_INDIRECT_ACTION_IDX_GET(job->action);
 			aso_ct = mlx5_ipool_get(priv->hws_ctpool->cts, idx);
 			mlx5_aso_ct_obj_analyze(job->query.user,
 						job->query.hw);
@@ -10225,7 +10221,6 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 			  uint32_t idx,
 			  struct rte_flow_error *error)
 {
-	uint32_t ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
@@ -10235,7 +10230,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT destruction is not allowed to guest port");
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10244,7 +10239,7 @@ flow_hw_conntrack_destroy(struct rte_eth_dev *dev,
 	}
 	__atomic_store_n(&ct->state, ASO_CONNTRACK_FREE,
 				 __ATOMIC_RELAXED);
-	mlx5_ipool_free(pool->cts, ct_idx);
+	mlx5_ipool_free(pool->cts, idx);
 	return 0;
 }
 
@@ -10257,15 +10252,13 @@ flow_hw_conntrack_query(struct rte_eth_dev *dev, uint32_t queue, uint32_t idx,
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
-	uint32_t ct_idx;
 
 	if (priv->shared_host)
 		return rte_flow_error_set(error, ENOTSUP,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT query is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10293,7 +10286,6 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_aso_ct_pool *pool = priv->hws_ctpool;
 	struct mlx5_aso_ct_action *ct;
 	const struct rte_flow_action_conntrack *new_prf;
-	uint32_t ct_idx;
 	int ret = 0;
 
 	if (priv->shared_host)
@@ -10301,8 +10293,7 @@ flow_hw_conntrack_update(struct rte_eth_dev *dev, uint32_t queue,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				NULL,
 				"CT update is not allowed to guest port");
-	ct_idx = MLX5_ACTION_CTX_CT_GET_IDX(idx);
-	ct = mlx5_ipool_get(pool->cts, ct_idx);
+	ct = mlx5_ipool_get(pool->cts, idx);
 	if (!ct) {
 		return rte_flow_error_set(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -10363,13 +10354,6 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 				   "CT is not enabled");
 		return 0;
 	}
-	if (dev->data->port_id >= MLX5_INDIRECT_ACT_CT_MAX_PORT) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "CT supports port indexes up to "
-				   RTE_STR(MLX5_ACTION_CTX_CT_MAX_PORT));
-		return 0;
-	}
 	ct = mlx5_ipool_zmalloc(pool->cts, &ct_idx);
 	if (!ct) {
 		rte_flow_error_set(error, rte_errno,
@@ -10399,8 +10383,7 @@ flow_hw_conntrack_create(struct rte_eth_dev *dev, uint32_t queue,
 			return 0;
 		}
 	}
-	return (struct rte_flow_action_handle *)(uintptr_t)
-		MLX5_ACTION_CTX_CT_GEN_IDX(PORT_ID(priv), ct_idx);
+	return MLX5_INDIRECT_ACT_HWS_CT_GEN_IDX(ct_idx);
 }
 
 /**
@@ -10741,7 +10724,7 @@ flow_hw_action_handle_update(struct rte_eth_dev *dev, uint32_t queue,
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
 		if (ct_conf->state)
 			aso = true;
-		ret = flow_hw_conntrack_update(dev, queue, update, act_idx,
+		ret = flow_hw_conntrack_update(dev, queue, update, idx,
 					       job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
@@ -10830,7 +10813,7 @@ flow_hw_action_handle_destroy(struct rte_eth_dev *dev, uint32_t queue,
 		mlx5_hws_cnt_shared_put(priv->hws_cpool, &act_idx);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_CT:
-		ret = flow_hw_conntrack_destroy(dev, act_idx, error);
+		ret = flow_hw_conntrack_destroy(dev, idx, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_METER_MARK:
 		aso_mtr = mlx5_ipool_get(pool->idx_pool, idx);
@@ -11116,6 +11099,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 	struct mlx5_hw_q_job *job = NULL;
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
 	uint32_t type = act_idx >> MLX5_INDIRECT_ACTION_TYPE_OFFSET;
+	uint32_t idx = MLX5_INDIRECT_ACTION_IDX_GET(handle);
 	uint32_t age_idx = act_idx & MLX5_HWS_AGE_IDX_MASK;
 	int ret;
 	bool push = flow_hw_action_push(attr);
@@ -11139,7 +11123,7 @@ flow_hw_action_handle_query(struct rte_eth_dev *dev, uint32_t queue,
 		aso = true;
 		if (job)
 			job->query.user = data;
-		ret = flow_hw_conntrack_query(dev, queue, act_idx, data,
+		ret = flow_hw_conntrack_query(dev, queue, idx, data,
 					      job, push, error);
 		break;
 	case MLX5_INDIRECT_ACTION_TYPE_QUOTA:
-- 
2.25.1


^ permalink raw reply	[flat|nested] 16+ messages in thread

* RE: [PATCH v3 0/4] net/mlx5: connection tracking changes
  2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
                       ` (3 preceding siblings ...)
  2024-02-27 13:52     ` [PATCH v3 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
@ 2024-02-28 10:12     ` Raslan Darawsheh
  4 siblings, 0 replies; 16+ messages in thread
From: Raslan Darawsheh @ 2024-02-28 10:12 UTC (permalink / raw)
  To: Dariusz Sosnowski, Slava Ovsiienko, Ori Kam, Suanming Mou, Matan Azrad
  Cc: dev

Hi,

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Tuesday, February 27, 2024 3:52 PM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>;
> Suanming Mou <suanmingm@nvidia.com>; Matan Azrad
> <matan@nvidia.com>
> Cc: dev@dpdk.org
> Subject: [PATCH v3 0/4] net/mlx5: connection tracking changes
> 
> Patches 1 and 2 contain fixes for existing implementation of connection
> tracking flow actions.
> 
> Patch 3 adds support for sharing connection tracking flow actions between
> ports when ports' flow engines are configured with
> RTE_FLOW_PORT_FLAG_SHARE_INDIRECT flag set.
> 
> Patch 4 is based on the previous one and removes the limitation on number of
> ports when connection tracking flow actions are used with HW Steering flow
> engine.
> 
> Depends-on: series-31246 ("net/mlx5: move meter init functions")
> 
> v3:
> - Rebased.
> - Added Depends-on tag.
> 
> v2:
> - Rebased on top of v24.03-rc1
> - Updated mlx5 docs.
> 
> Dariusz Sosnowski (3):
>   net/mlx5: fix conntrack action handle representation
>   net/mlx5: fix connection tracking action validation
>   net/mlx5: remove port from conntrack handle representation
> 
> Suanming Mou (1):
>   net/mlx5: add cross port CT object sharing
> 
>  doc/guides/nics/mlx5.rst               |   4 +-
>  doc/guides/rel_notes/release_24_03.rst |   2 +
>  drivers/net/mlx5/mlx5_flow.h           |  20 ++-
>  drivers/net/mlx5/mlx5_flow_dv.c        |   9 ++
>  drivers/net/mlx5/mlx5_flow_hw.c        | 180 +++++++++++++------------
>  5 files changed, 123 insertions(+), 92 deletions(-)
> 
> --
> 2.25.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh

^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread

Thread overview: 16+ messages
2024-02-21 10:01 [PATCH 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
2024-02-21 10:01 ` [PATCH 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
2024-02-21 10:01 ` [PATCH 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
2024-02-21 10:01 ` [PATCH 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
2024-02-21 10:01 ` [PATCH 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
2024-02-23 14:23 ` [PATCH v2 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
2024-02-23 14:23   ` [PATCH v2 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
2024-02-23 14:23   ` [PATCH v2 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
2024-02-23 14:23   ` [PATCH v2 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
2024-02-23 14:23   ` [PATCH v2 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
2024-02-27 13:52   ` [PATCH v3 0/4] net/mlx5: connection tracking changes Dariusz Sosnowski
2024-02-27 13:52     ` [PATCH v3 1/4] net/mlx5: fix conntrack action handle representation Dariusz Sosnowski
2024-02-27 13:52     ` [PATCH v3 2/4] net/mlx5: fix connection tracking action validation Dariusz Sosnowski
2024-02-27 13:52     ` [PATCH v3 3/4] net/mlx5: add cross port CT object sharing Dariusz Sosnowski
2024-02-27 13:52     ` [PATCH v3 4/4] net/mlx5: remove port from conntrack handle representation Dariusz Sosnowski
2024-02-28 10:12     ` [PATCH v3 0/4] net/mlx5: connection tracking changes Raslan Darawsheh
