DPDK patches and discussions
* [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize
@ 2020-04-07  3:59 Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 1/8] net/mlx5: fix incorrect counter container usage Suanming Mou
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  Cc: dev, rasland

As part of the plan to reduce rte_flow memory consumption, the counter
memory consumption is optimized from two perspectives.

First, save the counter in rte_flow as an index instead of a pointer.
Since the counters are currently allocated from pools, a counter can be
addressed by its position in the pool. The counter index is made up of
the pool index and the counter offset within the pool.

Second, split the counter struct members into those shared by batch and
non-batch counters and those used only by non-batch counters. Currently
there are two kinds of counters, batch and non-batch. The most widely
used batch counters only need a limited set of the members in the counter
struct. Moving the members used only by non-batch counters into an
extended counter struct, and allocating that memory only for non-batch
counter pools, saves memory for the batch counters.


Suanming Mou (8):
  net/mlx5: fix incorrect counter container usage
  net/mlx5: optimize counter release query generation
  net/mlx5: change verbs counter allocator to indexed
  common/mlx5: add batch counter id offset
  net/mlx5: change Direct Verbs counter to indexed
  net/mlx5: optimize flow counter handle type
  net/mlx5: split the counter struct
  net/mlx5: reorganize fallback counter management

 drivers/common/mlx5/mlx5_prm.h     |   9 +
 drivers/net/mlx5/mlx5.c            |   6 +-
 drivers/net/mlx5/mlx5.h            |  52 +++--
 drivers/net/mlx5/mlx5_flow.c       |  28 ++-
 drivers/net/mlx5/mlx5_flow.h       |  10 +-
 drivers/net/mlx5/mlx5_flow_dv.c    | 445 ++++++++++++++++++-------------------
 drivers/net/mlx5/mlx5_flow_verbs.c | 173 ++++++++++----
 7 files changed, 428 insertions(+), 295 deletions(-)

-- 
1.8.3.1


^ permalink raw reply	[flat|nested] 10+ messages in thread

* [dpdk-dev] [PATCH 1/8] net/mlx5: fix incorrect counter container usage
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 2/8] net/mlx5: optimize counter release query generation Suanming Mou
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland, stable

Since a non-batch counter pool allocates only one counter at a time,
after the newly allocated counter is popped out, the pool becomes empty
and is moved to the end of the pool list in the container.

Currently, a new non-batch counter allocation may happen together with a
new counter pool allocation, which means the new counter comes from a new
pool. While the new pool is allocated, the container may be resized and
switched. In this case, after the pool becomes empty, it should be added
to the pool list of the new container it belongs to.

Update the container pointer along with the pool allocation to avoid
adding the pool to the incorrect container.

Fixes: 5382d28c2110 ("net/mlx5: accelerate DV flow counter transactions")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6aa6e83..f022751 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4153,11 +4153,13 @@ struct field_modify_info modify_tcp[] = {
  *   The devX counter handle.
  * @param[in] batch
  *   Whether the pool is for counter that was allocated by batch command.
+ * @param[in/out] cont_cur
+ *   Pointer to the container pointer; it will be updated on pool resize.
  *
  * @return
- *   A new pool pointer on success, NULL otherwise and rte_errno is set.
+ *   The pool container pointer on success, NULL otherwise and rte_errno is set.
  */
-static struct mlx5_flow_counter_pool *
+static struct mlx5_pools_container *
 flow_dv_pool_create(struct rte_eth_dev *dev, struct mlx5_devx_obj *dcs,
 		    uint32_t batch)
 {
@@ -4191,12 +4193,12 @@ struct field_modify_info modify_tcp[] = {
 	 */
 	rte_atomic64_set(&pool->query_gen, 0x2);
 	TAILQ_INIT(&pool->counters);
-	TAILQ_INSERT_TAIL(&cont->pool_list, pool, next);
+	TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
 	cont->pools[n_valid] = pool;
 	/* Pool initialization must be updated before host thread access. */
 	rte_cio_wmb();
 	rte_atomic16_add(&cont->n_valid, 1);
-	return pool;
+	return cont;
 }
 
 /**
@@ -4210,33 +4212,35 @@ struct field_modify_info modify_tcp[] = {
  *   Whether the pool is for counter that was allocated by batch command.
  *
  * @return
- *   The free counter pool pointer and @p cnt_free is set on success,
+ *   The counter container pointer and @p cnt_free is set on success,
  *   NULL otherwise and rte_errno is set.
  */
-static struct mlx5_flow_counter_pool *
+static struct mlx5_pools_container *
 flow_dv_counter_pool_prepare(struct rte_eth_dev *dev,
 			     struct mlx5_flow_counter **cnt_free,
 			     uint32_t batch)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_pools_container *cont;
 	struct mlx5_flow_counter_pool *pool;
 	struct mlx5_devx_obj *dcs = NULL;
 	struct mlx5_flow_counter *cnt;
 	uint32_t i;
 
+	cont = MLX5_CNT_CONTAINER(priv->sh, batch, 0);
 	if (!batch) {
 		/* bulk_bitmap must be 0 for single counter allocation. */
 		dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0);
 		if (!dcs)
 			return NULL;
-		pool = flow_dv_find_pool_by_id
-			(MLX5_CNT_CONTAINER(priv->sh, batch, 0), dcs->id);
+		pool = flow_dv_find_pool_by_id(cont, dcs->id);
 		if (!pool) {
-			pool = flow_dv_pool_create(dev, dcs, batch);
-			if (!pool) {
+			cont = flow_dv_pool_create(dev, dcs, batch);
+			if (!cont) {
 				mlx5_devx_cmd_destroy(dcs);
 				return NULL;
 			}
+			pool = TAILQ_FIRST(&cont->pool_list);
 		} else if (dcs->id < pool->min_dcs->id) {
 			rte_atomic64_set(&pool->a64_dcs,
 					 (int64_t)(uintptr_t)dcs);
@@ -4245,7 +4249,7 @@ struct field_modify_info modify_tcp[] = {
 		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
 		cnt->dcs = dcs;
 		*cnt_free = cnt;
-		return pool;
+		return cont;
 	}
 	/* bulk_bitmap is in 128 counters units. */
 	if (priv->config.hca_attr.flow_counter_bulk_alloc_bitmap & 0x4)
@@ -4254,18 +4258,19 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENODATA;
 		return NULL;
 	}
-	pool = flow_dv_pool_create(dev, dcs, batch);
-	if (!pool) {
+	cont = flow_dv_pool_create(dev, dcs, batch);
+	if (!cont) {
 		mlx5_devx_cmd_destroy(dcs);
 		return NULL;
 	}
+	pool = TAILQ_FIRST(&cont->pool_list);
 	for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
 		cnt = &pool->counters_raw[i];
 		cnt->pool = pool;
 		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
 	}
 	*cnt_free = &pool->counters_raw[0];
-	return pool;
+	return cont;
 }
 
 /**
@@ -4366,9 +4371,10 @@ struct field_modify_info modify_tcp[] = {
 		cnt_free = NULL;
 	}
 	if (!cnt_free) {
-		pool = flow_dv_counter_pool_prepare(dev, &cnt_free, batch);
-		if (!pool)
+		cont = flow_dv_counter_pool_prepare(dev, &cnt_free, batch);
+		if (!cont)
 			return NULL;
+		pool = TAILQ_FIRST(&cont->pool_list);
 	}
 	cnt_free->batch = batch;
 	/* Create a DV counter action only in the first time usage. */
-- 
1.8.3.1



* [dpdk-dev] [PATCH 2/8] net/mlx5: optimize counter release query generation
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 1/8] net/mlx5: fix incorrect counter container usage Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 3/8] net/mlx5: change verbs counter allocator to indexed Suanming Mou
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

Query generation was introduced to prevent a counter from being
reallocated before its statistics are fully updated, since counters
released between the query trigger and the query handler may miss the
packets that arrived in the trigger-handler gap period. In this case, a
counter can only be reallocated once the pool query_gen is greater than
the counter query_gen + 1, which indicates that a new round of query has
finished and the statistics are fully updated.

Splitting the pool query_gen into start_query_gen and end_query_gen
helps to better identify the counters released in the gap period, and
allows counters released before the query trigger or after the query
handler to be reallocated more efficiently.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h         |  3 ++-
 drivers/net/mlx5/mlx5_flow.c    | 14 +++++++++++++-
 drivers/net/mlx5/mlx5_flow_dv.c | 18 ++++++++++++++----
 3 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 34ab475..fe7e684 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -283,7 +283,8 @@ struct mlx5_flow_counter_pool {
 		rte_atomic64_t a64_dcs;
 	};
 	/* The devx object of the minimum counter ID. */
-	rte_atomic64_t query_gen;
+	rte_atomic64_t start_query_gen; /* Query start round. */
+	rte_atomic64_t end_query_gen; /* Query end round. */
 	uint32_t n_counters: 16; /* Number of devx allocated counters. */
 	rte_spinlock_t sl; /* The pool lock. */
 	struct mlx5_counter_stats_raw *raw;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6438a14..d2e9cc4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5655,6 +5655,13 @@ struct mlx5_flow_counter *
 	dcs = (struct mlx5_devx_obj *)(uintptr_t)rte_atomic64_read
 							      (&pool->a64_dcs);
 	offset = batch ? 0 : dcs->id % MLX5_COUNTERS_PER_POOL;
+	/*
+	 * Identify the counters released between query trigger and query
+	 * handler more efficiently. A counter released in this gap period
+	 * should wait for a new round of query, as the newly arrived
+	 * packets will not be taken into account.
+	 */
+	rte_atomic64_add(&pool->start_query_gen, 1);
 	ret = mlx5_devx_cmd_flow_counter_query(dcs, 0, MLX5_COUNTERS_PER_POOL -
 					       offset, NULL, NULL,
 					       pool->raw_hw->mem_mng->dm->id,
@@ -5663,6 +5670,7 @@ struct mlx5_flow_counter *
 					       sh->devx_comp,
 					       (uint64_t)(uintptr_t)pool);
 	if (ret) {
+		rte_atomic64_sub(&pool->start_query_gen, 1);
 		DRV_LOG(ERR, "Failed to trigger asynchronous query for dcs ID"
 			" %d", pool->min_dcs->id);
 		pool->raw_hw = NULL;
@@ -5702,13 +5710,17 @@ struct mlx5_flow_counter *
 	struct mlx5_counter_stats_raw *raw_to_free;
 
 	if (unlikely(status)) {
+		rte_atomic64_sub(&pool->start_query_gen, 1);
 		raw_to_free = pool->raw_hw;
 	} else {
 		raw_to_free = pool->raw;
 		rte_spinlock_lock(&pool->sl);
 		pool->raw = pool->raw_hw;
 		rte_spinlock_unlock(&pool->sl);
-		rte_atomic64_add(&pool->query_gen, 1);
+		MLX5_ASSERT(rte_atomic64_read(&pool->end_query_gen) + 1 ==
+			    rte_atomic64_read(&pool->start_query_gen));
+		rte_atomic64_set(&pool->end_query_gen,
+				 rte_atomic64_read(&pool->start_query_gen));
 		/* Be sure the new raw counters data is updated in memory. */
 		rte_cio_wmb();
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f022751..074f05f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4190,8 +4190,12 @@ struct field_modify_info modify_tcp[] = {
 	/*
 	 * The generation of the new allocated counters in this pool is 0, 2 in
 	 * the pool generation makes all the counters valid for allocation.
+	 * The start and end query generation ensure that counters released
+	 * in the query-to-update gap period are not reallocated before the
+	 * last query finishes and the stats are updated in memory.
 	 */
-	rte_atomic64_set(&pool->query_gen, 0x2);
+	rte_atomic64_set(&pool->start_query_gen, 0x2);
+	rte_atomic64_set(&pool->end_query_gen, 0x2);
 	TAILQ_INIT(&pool->counters);
 	TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
 	cont->pools[n_valid] = pool;
@@ -4365,8 +4369,8 @@ struct field_modify_info modify_tcp[] = {
 		 * updated too.
 		 */
 		cnt_free = TAILQ_FIRST(&pool->counters);
-		if (cnt_free && cnt_free->query_gen + 1 <
-		    rte_atomic64_read(&pool->query_gen))
+		if (cnt_free && cnt_free->query_gen <
+		    rte_atomic64_read(&pool->end_query_gen))
 			break;
 		cnt_free = NULL;
 	}
@@ -4441,7 +4445,13 @@ struct field_modify_info modify_tcp[] = {
 
 		/* Put the counter in the end - the last updated one. */
 		TAILQ_INSERT_TAIL(&pool->counters, counter, next);
-		counter->query_gen = rte_atomic64_read(&pool->query_gen);
+		/*
+		 * Counters released between query trigger and handler need
+		 * to wait for the next round of query, since the packets
+		 * arriving in the gap period will not be taken into account
+		 * by the old counter data.
+		 */
+		counter->query_gen = rte_atomic64_read(&pool->start_query_gen);
 	}
 }
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 3/8] net/mlx5: change verbs counter allocator to indexed
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 1/8] net/mlx5: fix incorrect counter container usage Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 2/8] net/mlx5: optimize counter release query generation Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 4/8] common/mlx5: add batch counter id offset Suanming Mou
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

This is part of the counter optimization that saves the indexed counter
id instead of the counter pointer in rte_flow.

Placing the Verbs counters into the container pools helps the counters
be indexed correctly, independently of the raw counter value.

The counter pointer in rte_flow will be changed to an indexed value once
the DV counter is also changed to indexed.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h            |   9 +++
 drivers/net/mlx5/mlx5_flow_dv.c    |   2 -
 drivers/net/mlx5/mlx5_flow_verbs.c | 137 ++++++++++++++++++++++++++++++-------
 3 files changed, 122 insertions(+), 26 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index fe7e684..6b10dfb 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -238,6 +238,15 @@ struct mlx5_drop {
 
 #define MLX5_COUNTERS_PER_POOL 512
 #define MLX5_MAX_PENDING_QUERIES 4
+#define MLX5_CNT_CONTAINER_RESIZE 64
+/*
+ * The pool index and the counter offset in the pool array make up the
+ * counter index. 1 is added to the index so that a counter from pool 0
+ * at offset 0 does not get index 0, since index 0 currently means an
+ * invalid counter.
+ */
+#define MLX5_MAKE_CNT_IDX(pi, offset) \
+	((pi) * MLX5_COUNTERS_PER_POOL + (offset) + 1)
 
 struct mlx5_flow_counter_pool;
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 074f05f..dc11304 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3820,8 +3820,6 @@ struct field_modify_info modify_tcp[] = {
 	return 0;
 }
 
-#define MLX5_CNT_CONTAINER_RESIZE 64
-
 /**
  * Get or create a flow counter.
  *
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index ccd3395..c053778 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -38,6 +38,36 @@
 	(!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)
 
 /**
+ * Get Verbs flow counter by index.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   mlx5 flow counter index in the container.
+ * @param[out] ppool
+ *   mlx5 flow counter pool in the container.
+ *
+ * @return
+ *   A pointer to the counter, NULL otherwise.
+ */
+static struct mlx5_flow_counter *
+flow_verbs_counter_get_by_idx(struct rte_eth_dev *dev,
+			      uint32_t idx,
+			      struct mlx5_flow_counter_pool **ppool)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
+	struct mlx5_flow_counter_pool *pool;
+
+	idx--;
+	pool = cont->pools[idx / MLX5_COUNTERS_PER_POOL];
+	MLX5_ASSERT(pool);
+	if (ppool)
+		*ppool = pool;
+	return &pool->counters_raw[idx % MLX5_COUNTERS_PER_POOL];
+}
+
+/**
  * Create Verbs flow counter with Verbs library.
  *
  * @param[in] dev
@@ -121,21 +151,70 @@
 flow_verbs_counter_new(struct rte_eth_dev *dev, uint32_t shared, uint32_t id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flow_counter *cnt;
+	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
+	struct mlx5_flow_counter_pool *pool = NULL;
+	struct mlx5_flow_counter *cnt = NULL;
+	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
+	uint32_t pool_idx;
+	uint32_t i;
 	int ret;
 
 	if (shared) {
-		TAILQ_FOREACH(cnt, &priv->sh->cmng.flow_counters, next) {
-			if (cnt->shared && cnt->id == id) {
-				cnt->ref_cnt++;
-				return cnt;
+		for (pool_idx = 0; pool_idx < n_valid; ++pool_idx) {
+			pool = cont->pools[pool_idx];
+			for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
+				cnt = &pool->counters_raw[i];
+				if (cnt->shared && cnt->id == id) {
+					cnt->ref_cnt++;
+					return (struct mlx5_flow_counter *)
+					       (uintptr_t)
+					       MLX5_MAKE_CNT_IDX(pool_idx, i);
+				}
 			}
 		}
 	}
-	cnt = rte_calloc(__func__, 1, sizeof(*cnt), 0);
+	for (pool_idx = 0; pool_idx < n_valid; ++pool_idx) {
+		pool = cont->pools[pool_idx];
+		if (!pool)
+			continue;
+		cnt = TAILQ_FIRST(&pool->counters);
+		if (cnt)
+			break;
+	}
 	if (!cnt) {
-		rte_errno = ENOMEM;
-		return NULL;
+		struct mlx5_flow_counter_pool **pools;
+		uint32_t size;
+
+		if (n_valid == cont->n) {
+			/* Resize the container pool array. */
+			size = sizeof(struct mlx5_flow_counter_pool *) *
+				     (n_valid + MLX5_CNT_CONTAINER_RESIZE);
+			pools = rte_zmalloc(__func__, size, 0);
+			if (!pools)
+				return NULL;
+			if (n_valid) {
+				memcpy(pools, cont->pools,
+				       sizeof(struct mlx5_flow_counter_pool *) *
+				       n_valid);
+				rte_free(cont->pools);
+			}
+			cont->pools = pools;
+			cont->n += MLX5_CNT_CONTAINER_RESIZE;
+		}
+		/* Allocate memory for the new pool. */
+		size = sizeof(*pool) + sizeof(*cnt) * MLX5_COUNTERS_PER_POOL;
+		pool = rte_calloc(__func__, 1, size, 0);
+		if (!pool)
+			return NULL;
+		for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
+			cnt = &pool->counters_raw[i];
+			TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
+		}
+		cnt = &pool->counters_raw[0];
+		cont->pools[n_valid] = pool;
+		pool_idx = n_valid;
+		rte_atomic16_add(&cont->n_valid, 1);
+		TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
 	}
 	cnt->id = id;
 	cnt->shared = shared;
@@ -145,11 +224,11 @@
 	/* Create counter with Verbs. */
 	ret = flow_verbs_counter_create(dev, cnt);
 	if (!ret) {
-		TAILQ_INSERT_HEAD(&priv->sh->cmng.flow_counters, cnt, next);
-		return cnt;
+		TAILQ_REMOVE(&pool->counters, cnt, next);
+		return (struct mlx5_flow_counter *)(uintptr_t)
+		       MLX5_MAKE_CNT_IDX(pool_idx, (cnt - pool->counters_raw));
 	}
 	/* Some error occurred in Verbs library. */
-	rte_free(cnt);
 	rte_errno = -ret;
 	return NULL;
 }
@@ -166,16 +245,20 @@
 flow_verbs_counter_release(struct rte_eth_dev *dev,
 			   struct mlx5_flow_counter *counter)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_counter_pool *pool;
+	struct mlx5_flow_counter *cnt;
 
+	cnt = flow_verbs_counter_get_by_idx(dev, (uintptr_t)(void *)counter,
+					    &pool);
 	if (--counter->ref_cnt == 0) {
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
-		claim_zero(mlx5_glue->destroy_counter_set(counter->cs));
+		claim_zero(mlx5_glue->destroy_counter_set(cnt->cs));
+		cnt->cs = NULL;
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
-		claim_zero(mlx5_glue->destroy_counters(counter->cs));
+		claim_zero(mlx5_glue->destroy_counters(cnt->cs));
+		cnt->cs = NULL;
 #endif
-		TAILQ_REMOVE(&priv->sh->cmng.flow_counters, counter, next);
-		rte_free(counter);
+		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
 	}
 }
 
@@ -193,11 +276,14 @@
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
 	defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
 	if (flow->counter && flow->counter->cs) {
+		struct mlx5_flow_counter *cnt = flow_verbs_counter_get_by_idx
+						(dev, (uintptr_t)(void *)
+						flow->counter, NULL);
 		struct rte_flow_query_count *qc = data;
 		uint64_t counters[2] = {0, 0};
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 		struct ibv_query_counter_set_attr query_cs_attr = {
-			.cs = flow->counter->cs,
+			.cs = cnt->cs,
 			.query_flags = IBV_COUNTER_SET_FORCE_UPDATE,
 		};
 		struct ibv_counter_set_data query_out = {
@@ -208,7 +294,7 @@
 						       &query_out);
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
 		int err = mlx5_glue->query_counters
-			       (flow->counter->cs, counters,
+			       (cnt->cs, counters,
 				RTE_DIM(counters),
 				IBV_READ_COUNTERS_ATTR_PREFER_CACHED);
 #endif
@@ -220,11 +306,11 @@
 				 "cannot read counter");
 		qc->hits_set = 1;
 		qc->bytes_set = 1;
-		qc->hits = counters[0] - flow->counter->hits;
-		qc->bytes = counters[1] - flow->counter->bytes;
+		qc->hits = counters[0] - cnt->hits;
+		qc->bytes = counters[1] - cnt->bytes;
 		if (qc->reset) {
-			flow->counter->hits = counters[0];
-			flow->counter->bytes = counters[1];
+			cnt->hits = counters[0];
+			cnt->bytes = counters[1];
 		}
 		return 0;
 	}
@@ -976,6 +1062,7 @@
 {
 	const struct rte_flow_action_count *count = action->conf;
 	struct rte_flow *flow = dev_flow->flow;
+	struct mlx5_flow_counter *cnt = NULL;
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
 	defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
 	unsigned int size = sizeof(struct ibv_flow_spec_counter_action);
@@ -995,11 +1082,13 @@
 						  "cannot get counter"
 						  " context.");
 	}
+	cnt = flow_verbs_counter_get_by_idx(dev, (uintptr_t)(void *)
+					    flow->counter, NULL);
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
-	counter.counter_set_handle = flow->counter->cs->handle;
+	counter.counter_set_handle = cnt->cs->handle;
 	flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
-	counter.counters = flow->counter->cs;
+	counter.counters = cnt->cs;
 	flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
 #endif
 	return 0;
-- 
1.8.3.1



* [dpdk-dev] [PATCH 4/8] common/mlx5: add batch counter id offset
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (2 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 3/8] net/mlx5: change verbs counter allocator to indexed Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 5/8] net/mlx5: change Direct Verbs counter to indexed Suanming Mou
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

This commit is part of the DV counter optimization.

The batch counter dcs id starts from 0x800000 and the non-batch counter
id starts from 0. The counter is now indexed by pool index and the
counter offset in the pool counters_raw array, which means the counter
index ranges are the same for batch and non-batch counters. Adding the
0x800000 batch counter offset to the batch counter index indicates
whether a counter index comes from the batch or the non-batch container
pool.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/common/mlx5/mlx5_prm.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 00fd7c1..4ab1c75 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -767,6 +767,15 @@ enum {
 
 #define MLX5_ADAPTER_PAGE_SHIFT 12
 #define MLX5_LOG_RQ_STRIDE_SHIFT 4
+/**
+ * The batch counter dcs id starts from 0x800000 and none batch counter
+ * starts from 0. As currently, the counter is changed to be indexed by
+ * pool index and the offset of the counter in the pool counters_raw array.
+ * It means now the counter index is same for batch and none batch counter.
+ * Add the 0x800000 batch counter offset to the batch counter index helps
+ * indicate the counter index is from batch or none batch container pool.
+ */
+#define MLX5_CNT_BATCH_OFFSET 0x800000
 
 /* Flow counters. */
 struct mlx5_ifc_alloc_flow_counter_out_bits {
-- 
1.8.3.1



* [dpdk-dev] [PATCH 5/8] net/mlx5: change Direct Verbs counter to indexed
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (3 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 4/8] common/mlx5: add batch counter id offset Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 6/8] net/mlx5: optimize flow counter handle type Suanming Mou
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

This part of the counter optimization changes the DV counter to be
indexed, as was already done for Verbs. With this, all mlx5 flow
counters can be addressed by index.

The counter index is composed of the pool index and the counter offset
in the pool counter array. The batch/non-batch dcs ID offset 0x800000 is
used to avoid mixing up the two index spaces: since batch counter dcs
IDs start from 0x800000 and non-batch counter dcs IDs start from 0, the
0x800000 offset is added to the batch counter index to mark it as a
batch counter index.

The counter field in the rte_flow struct becomes an index instead of a
pointer. This saves 4 bytes for every rte_flow; with millions of
rte_flow entries, it saves megabytes of memory.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h         |   3 +-
 drivers/net/mlx5/mlx5_flow_dv.c | 205 ++++++++++++++++++++++++++++------------
 2 files changed, 147 insertions(+), 61 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6b10dfb..6f01eea 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -290,11 +290,12 @@ struct mlx5_flow_counter_pool {
 	union {
 		struct mlx5_devx_obj *min_dcs;
 		rte_atomic64_t a64_dcs;
+		int dcs_id; /* Fallback pool counter id range. */
 	};
 	/* The devx object of the minimum counter ID. */
 	rte_atomic64_t start_query_gen; /* Query start round. */
 	rte_atomic64_t end_query_gen; /* Query end round. */
-	uint32_t n_counters: 16; /* Number of devx allocated counters. */
+	uint32_t index; /* Pool index in container. */
 	rte_spinlock_t sl; /* The pool lock. */
 	struct mlx5_counter_stats_raw *raw;
 	struct mlx5_counter_stats_raw *raw_hw; /* The raw on HW working. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index dc11304..91e130c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -73,6 +73,13 @@
 	uint32_t attr;
 };
 
+static struct mlx5_flow_counter_pool *
+flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, bool fallback,
+			int id);
+static struct mlx5_pools_container *
+flow_dv_pool_create(struct rte_eth_dev *dev, struct mlx5_devx_obj *dcs,
+		    uint32_t batch);
+
 /**
  * Initialize flow attributes structure according to flow items' types.
  *
@@ -3831,37 +3838,38 @@ struct field_modify_info modify_tcp[] = {
  *   Counter identifier.
  *
  * @return
- *   pointer to flow counter on success, NULL otherwise and rte_errno is set.
+ *   Index to flow counter on success, 0 otherwise and rte_errno is set.
  */
-static struct mlx5_flow_counter *
+static uint32_t
 flow_dv_counter_alloc_fallback(struct rte_eth_dev *dev, uint32_t shared,
 			       uint32_t id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
+	struct mlx5_flow_counter_pool *pool;
 	struct mlx5_flow_counter *cnt = NULL;
 	struct mlx5_devx_obj *dcs = NULL;
+	uint32_t offset;
 
 	if (!priv->config.devx) {
 		rte_errno = ENOTSUP;
-		return NULL;
-	}
-	if (shared) {
-		TAILQ_FOREACH(cnt, &priv->sh->cmng.flow_counters, next) {
-			if (cnt->shared && cnt->id == id) {
-				cnt->ref_cnt++;
-				return cnt;
-			}
-		}
+		return 0;
 	}
 	dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0);
 	if (!dcs)
-		return NULL;
-	cnt = rte_calloc(__func__, 1, sizeof(*cnt), 0);
-	if (!cnt) {
-		claim_zero(mlx5_devx_cmd_destroy(cnt->dcs));
-		rte_errno = ENOMEM;
-		return NULL;
+		return 0;
+	pool = flow_dv_find_pool_by_id(cont, true, dcs->id);
+	if (!pool) {
+		cont = flow_dv_pool_create(dev, dcs, 0);
+		if (!cont) {
+			mlx5_devx_cmd_destroy(dcs);
+			rte_errno = ENOMEM;
+			return 0;
+		}
+		pool = TAILQ_FIRST(&cont->pool_list);
 	}
+	offset = dcs->id % MLX5_COUNTERS_PER_POOL;
+	cnt = &pool->counters_raw[offset];
 	struct mlx5_flow_counter tmpl = {
 		.shared = shared,
 		.ref_cnt = 1,
@@ -3872,12 +3880,10 @@ struct field_modify_info modify_tcp[] = {
 	if (!tmpl.action) {
 		claim_zero(mlx5_devx_cmd_destroy(cnt->dcs));
 		rte_errno = errno;
-		rte_free(cnt);
-		return NULL;
+		return 0;
 	}
 	*cnt = tmpl;
-	TAILQ_INSERT_HEAD(&priv->sh->cmng.flow_counters, cnt, next);
-	return cnt;
+	return MLX5_MAKE_CNT_IDX(pool->index, offset);
 }
 
 /**
@@ -3889,17 +3895,16 @@ struct field_modify_info modify_tcp[] = {
  *   Pointer to the counter handler.
  */
 static void
-flow_dv_counter_release_fallback(struct rte_eth_dev *dev,
+flow_dv_counter_release_fallback(struct rte_eth_dev *dev __rte_unused,
 				 struct mlx5_flow_counter *counter)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
-
 	if (!counter)
 		return;
 	if (--counter->ref_cnt == 0) {
-		TAILQ_REMOVE(&priv->sh->cmng.flow_counters, counter, next);
+		claim_zero(mlx5_glue->destroy_flow_action(counter->action));
 		claim_zero(mlx5_devx_cmd_destroy(counter->dcs));
-		rte_free(counter);
+		counter->action = NULL;
+		counter->dcs = NULL;
 	}
 }
 
@@ -3947,10 +3952,49 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
+ * Get DV flow counter by index.
+ *
+ * @param[in] dev
+ *   Pointer to the Ethernet device structure.
+ * @param[in] idx
+ *   mlx5 flow counter index in the container.
+ * @param[out] ppool
+ *   mlx5 flow counter pool in the container,
+ *
+ * @return
+ *   Pointer to the counter, NULL otherwise.
+ */
+static struct mlx5_flow_counter *
+flow_dv_counter_get_by_idx(struct rte_eth_dev *dev,
+			   uint32_t idx,
+			   struct mlx5_flow_counter_pool **ppool)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_pools_container *cont;
+	struct mlx5_flow_counter_pool *pool;
+	uint32_t batch = 0;
+
+	idx--;
+	if (idx >= MLX5_CNT_BATCH_OFFSET) {
+		idx -= MLX5_CNT_BATCH_OFFSET;
+		batch = 1;
+	}
+	cont = MLX5_CNT_CONTAINER(priv->sh, batch, 0);
+	MLX5_ASSERT(idx / MLX5_COUNTERS_PER_POOL < cont->n);
+	pool = cont->pools[idx / MLX5_COUNTERS_PER_POOL];
+	MLX5_ASSERT(pool);
+	if (ppool)
+		*ppool = pool;
+	return &pool->counters_raw[idx % MLX5_COUNTERS_PER_POOL];
+}
+
+/**
  * Get a pool by devx counter ID.
  *
  * @param[in] cont
  *   Pointer to the counter container.
+ * @param[in] fallback
+ *   Fallback mode.
  * @param[in] id
  *   The counter devx ID.
  *
@@ -3958,17 +4002,29 @@ struct field_modify_info modify_tcp[] = {
 *   The counter pool pointer if it exists, NULL otherwise.
  */
 static struct mlx5_flow_counter_pool *
-flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, int id)
+flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, bool fallback,
+			int id)
 {
-	struct mlx5_flow_counter_pool *pool;
+	uint32_t i;
+	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
 
-	TAILQ_FOREACH(pool, &cont->pool_list, next) {
-		int base = (pool->min_dcs->id / MLX5_COUNTERS_PER_POOL) *
-				MLX5_COUNTERS_PER_POOL;
+	for (i = 0; i < n_valid; i++) {
+		struct mlx5_flow_counter_pool *pool = cont->pools[i];
+		int base = ((fallback ? pool->dcs_id : pool->min_dcs->id) /
+			   MLX5_COUNTERS_PER_POOL) * MLX5_COUNTERS_PER_POOL;
 
-		if (id >= base && id < base + MLX5_COUNTERS_PER_POOL)
+		if (id >= base && id < base + MLX5_COUNTERS_PER_POOL) {
+			/*
+			 * Move the pool to the head, as counter allocation
+			 * always takes the first pool in the container.
+			 */
+			if (pool != TAILQ_FIRST(&cont->pool_list)) {
+				TAILQ_REMOVE(&cont->pool_list, pool, next);
+				TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
+			}
 			return pool;
-	};
+		}
+	}
 	return NULL;
 }
 
@@ -4180,7 +4236,10 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	pool->min_dcs = dcs;
+	if (priv->counter_fallback)
+		pool->dcs_id = dcs->id;
+	else
+		pool->min_dcs = dcs;
 	pool->raw = cont->init_mem_mng->raws + n_valid %
 						     MLX5_CNT_CONTAINER_RESIZE;
 	pool->raw_hw = NULL;
@@ -4196,6 +4255,7 @@ struct field_modify_info modify_tcp[] = {
 	rte_atomic64_set(&pool->end_query_gen, 0x2);
 	TAILQ_INIT(&pool->counters);
 	TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
+	pool->index = n_valid;
 	cont->pools[n_valid] = pool;
 	/* Pool initialization must be updated before host thread access. */
 	rte_cio_wmb();
@@ -4235,7 +4295,7 @@ struct field_modify_info modify_tcp[] = {
 		dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0);
 		if (!dcs)
 			return NULL;
-		pool = flow_dv_find_pool_by_id(cont, dcs->id);
+		pool = flow_dv_find_pool_by_id(cont, false, dcs->id);
 		if (!pool) {
 			cont = flow_dv_pool_create(dev, dcs, batch);
 			if (!cont) {
@@ -4282,23 +4342,30 @@ struct field_modify_info modify_tcp[] = {
  *   Pointer to the relevant counter pool container.
  * @param[in] id
  *   The shared counter ID to search.
+ * @param[out] ppool
+ *   mlx5 flow counter pool in the container.
  *
  * @return
 *   NULL if it does not exist, otherwise pointer to the shared counter.
  */
 static struct mlx5_flow_counter *
-flow_dv_counter_shared_search(struct mlx5_pools_container *cont,
-			      uint32_t id)
+flow_dv_counter_shared_search(struct mlx5_pools_container *cont, uint32_t id,
+			      struct mlx5_flow_counter_pool **ppool)
 {
 	static struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_counter_pool *pool;
-	int i;
+	uint32_t i, j;
+	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
 
-	TAILQ_FOREACH(pool, &cont->pool_list, next) {
+	for (j = 0; j < n_valid; j++) {
+		pool = cont->pools[j];
 		for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
 			cnt = &pool->counters_raw[i];
-			if (cnt->ref_cnt && cnt->shared && cnt->id == id)
+			if (cnt->ref_cnt && cnt->shared && cnt->id == id) {
+				if (ppool)
+					*ppool = pool;
 				return cnt;
+			}
 		}
 	}
 	return NULL;
@@ -4337,24 +4404,28 @@ struct field_modify_info modify_tcp[] = {
 	uint32_t batch = (group && !shared) ? 1 : 0;
 	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, batch,
 							       0);
+	uint32_t cnt_idx;
 
-	if (priv->counter_fallback)
-		return flow_dv_counter_alloc_fallback(dev, shared, id);
 	if (!priv->config.devx) {
 		rte_errno = ENOTSUP;
 		return NULL;
 	}
 	if (shared) {
-		cnt_free = flow_dv_counter_shared_search(cont, id);
+		cnt_free = flow_dv_counter_shared_search(cont, id, &pool);
 		if (cnt_free) {
 			if (cnt_free->ref_cnt + 1 == 0) {
 				rte_errno = E2BIG;
 				return NULL;
 			}
 			cnt_free->ref_cnt++;
-			return cnt_free;
+			cnt_idx = pool->index * MLX5_COUNTERS_PER_POOL +
+				  (cnt_free - pool->counters_raw) + 1;
+			return (struct mlx5_flow_counter *)(uintptr_t)cnt_idx;
 		}
 	}
+	if (priv->counter_fallback)
+		return (struct mlx5_flow_counter *)(uintptr_t)
+		       flow_dv_counter_alloc_fallback(dev, shared, id);
 	/* Pools which have free counters are at the start. */
 	TAILQ_FOREACH(pool, &cont->pool_list, next) {
 		/*
@@ -4414,7 +4485,10 @@ struct field_modify_info modify_tcp[] = {
 		TAILQ_REMOVE(&cont->pool_list, pool, next);
 		TAILQ_INSERT_TAIL(&cont->pool_list, pool, next);
 	}
-	return cnt_free;
+	cnt_idx = MLX5_MAKE_CNT_IDX(pool->index,
+				    (cnt_free - pool->counters_raw));
+	cnt_idx += batch * MLX5_CNT_BATCH_OFFSET;
+	return (struct mlx5_flow_counter *)(uintptr_t)cnt_idx;
 }
 
 /**
@@ -4430,26 +4504,26 @@ struct field_modify_info modify_tcp[] = {
 			struct mlx5_flow_counter *counter)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_counter_pool *pool;
+	struct mlx5_flow_counter *cnt;
 
 	if (!counter)
 		return;
+	cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)counter, &pool);
 	if (priv->counter_fallback) {
-		flow_dv_counter_release_fallback(dev, counter);
+		flow_dv_counter_release_fallback(dev, cnt);
 		return;
 	}
-	if (--counter->ref_cnt == 0) {
-		struct mlx5_flow_counter_pool *pool =
-				flow_dv_counter_pool_get(counter);
-
+	if (--cnt->ref_cnt == 0) {
 		/* Put the counter in the end - the last updated one. */
-		TAILQ_INSERT_TAIL(&pool->counters, counter, next);
+		TAILQ_INSERT_TAIL(&pool->counters, cnt, next);
 		/*
 		 * Counters released between query trigger and handler need
 		 * to wait for the next round of query, since packets arriving
 		 * in the gap period will not be taken into account by the
 		 * old counter.
 		 */
-		counter->query_gen = rte_atomic64_read(&pool->start_query_gen);
+		cnt->query_gen = rte_atomic64_read(&pool->start_query_gen);
 	}
 }
 
@@ -7517,7 +7591,8 @@ struct field_modify_info modify_tcp[] = {
 			if (flow->counter == NULL)
 				goto cnt_err;
 			dev_flow->dv.actions[actions_n++] =
-				flow->counter->action;
+				  (flow_dv_counter_get_by_idx(dev,
+				  (uintptr_t)flow->counter, NULL))->action;
 			action_flags |= MLX5_FLOW_ACTION_COUNT;
 			break;
 cnt_err:
@@ -8447,7 +8522,11 @@ struct field_modify_info modify_tcp[] = {
 					  "counters are not supported");
 	if (flow->counter) {
 		uint64_t pkts, bytes;
-		int err = _flow_dv_query_count(dev, flow->counter, &pkts,
+		struct mlx5_flow_counter *cnt;
+
+		cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)flow->counter,
+						 NULL);
+		int err = _flow_dv_query_count(dev, cnt, &pkts,
 					       &bytes);
 
 		if (err)
@@ -8456,11 +8535,11 @@ struct field_modify_info modify_tcp[] = {
 					NULL, "cannot read counters");
 		qc->hits_set = 1;
 		qc->bytes_set = 1;
-		qc->hits = pkts - flow->counter->hits;
-		qc->bytes = bytes - flow->counter->bytes;
+		qc->hits = pkts - cnt->hits;
+		qc->bytes = bytes - cnt->bytes;
 		if (qc->reset) {
-			flow->counter->hits = pkts;
-			flow->counter->bytes = bytes;
+			cnt->hits = pkts;
+			cnt->bytes = bytes;
 		}
 		return 0;
 	}
@@ -8710,9 +8789,12 @@ struct field_modify_info modify_tcp[] = {
 	}
 	/* Create meter count actions */
 	for (i = 0; i <= RTE_MTR_DROPPED; i++) {
+		struct mlx5_flow_counter *cnt;
 		if (!fm->policer_stats.cnt[i])
 			continue;
-		mtb->count_actns[i] = fm->policer_stats.cnt[i]->action;
+		cnt = flow_dv_counter_get_by_idx(dev,
+		      (uintptr_t)fm->policer_stats.cnt[i], NULL);
+		mtb->count_actns[i] = cnt->action;
 	}
 	/* Create drop action. */
 	mtb->drop_actn = mlx5_glue->dr_create_flow_action_drop();
@@ -8944,15 +9026,18 @@ struct field_modify_info modify_tcp[] = {
  */
 static int
 flow_dv_counter_query(struct rte_eth_dev *dev,
-		      struct mlx5_flow_counter *cnt, bool clear,
+		      struct mlx5_flow_counter *counter, bool clear,
 		      uint64_t *pkts, uint64_t *bytes)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_flow_counter *cnt;
 	uint64_t inn_pkts, inn_bytes;
 	int ret;
 
 	if (!priv->config.devx)
 		return -1;
+
+	cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)counter, NULL);
 	ret = _flow_dv_query_count(dev, cnt, &inn_pkts, &inn_bytes);
 	if (ret)
 		return -1;
-- 
1.8.3.1



* [dpdk-dev] [PATCH 6/8] net/mlx5: optimize flow counter handle type
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (4 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 5/8] net/mlx5: change Direct Verbs counter to indexed Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 7/8] net/mlx5: split the counter struct Suanming Mou
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

Currently, both DV and verbs counters have been changed to be indexed. This
means that when creating a flow with a counter, the flow can save the index
value to address the counter.

Saving the 4-byte index value in rte_flow instead of an 8-byte pointer
helps save memory when there are millions of flows.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h            |  6 ++---
 drivers/net/mlx5/mlx5_flow.c       | 14 +++++-----
 drivers/net/mlx5/mlx5_flow.h       | 10 +++----
 drivers/net/mlx5/mlx5_flow_dv.c    | 54 ++++++++++++++++++--------------------
 drivers/net/mlx5/mlx5_flow_verbs.c | 36 +++++++++++--------------
 5 files changed, 56 insertions(+), 64 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6f01eea..1501e61 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -764,9 +764,9 @@ void mlx5_flow_async_pool_query_handle(struct mlx5_ibv_shared *sh,
 				       uint64_t async_id, int status);
 void mlx5_set_query_alarm(struct mlx5_ibv_shared *sh);
 void mlx5_flow_query_alarm(void *arg);
-struct mlx5_flow_counter *mlx5_counter_alloc(struct rte_eth_dev *dev);
-void mlx5_counter_free(struct rte_eth_dev *dev, struct mlx5_flow_counter *cnt);
-int mlx5_counter_query(struct rte_eth_dev *dev, struct mlx5_flow_counter *cnt,
+uint32_t mlx5_counter_alloc(struct rte_eth_dev *dev);
+void mlx5_counter_free(struct rte_eth_dev *dev, uint32_t cnt);
+int mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
 		       bool clear, uint64_t *pkts, uint64_t *bytes);
 int mlx5_flow_dev_dump(struct rte_eth_dev *dev, FILE *file,
 		       struct rte_flow_error *error);
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d2e9cc4..3b358b6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5494,9 +5494,9 @@ struct mlx5_meter_domains_infos *
  *   Pointer to Ethernet device structure.
  *
  * @return
- *   Pointer to allocated counter  on success, NULL otherwise.
+ *   Index to allocated counter on success, 0 otherwise.
  */
-struct mlx5_flow_counter *
+uint32_t
 mlx5_counter_alloc(struct rte_eth_dev *dev)
 {
 	const struct mlx5_flow_driver_ops *fops;
@@ -5509,7 +5509,7 @@ struct mlx5_flow_counter *
 	DRV_LOG(ERR,
 		"port %u counter allocate is not supported.",
 		 dev->data->port_id);
-	return NULL;
+	return 0;
 }
 
 /**
@@ -5518,10 +5518,10 @@ struct mlx5_flow_counter *
  * @param[in] dev
  *   Pointer to Ethernet device structure.
  * @param[in] cnt
- *   Pointer to counter to be free.
+ *   Index to counter to be freed.
  */
 void
-mlx5_counter_free(struct rte_eth_dev *dev, struct mlx5_flow_counter *cnt)
+mlx5_counter_free(struct rte_eth_dev *dev, uint32_t cnt)
 {
 	const struct mlx5_flow_driver_ops *fops;
 	struct rte_flow_attr attr = { .transfer = 0 };
@@ -5542,7 +5542,7 @@ struct mlx5_flow_counter *
  * @param[in] dev
  *   Pointer to Ethernet device structure.
  * @param[in] cnt
- *   Pointer to counter to query.
+ *   Index to counter to query.
  * @param[in] clear
  *   Set to clear counter statistics.
  * @param[out] pkts
@@ -5554,7 +5554,7 @@ struct mlx5_flow_counter *
  *   0 on success, a negative errno value otherwise.
  */
 int
-mlx5_counter_query(struct rte_eth_dev *dev, struct mlx5_flow_counter *cnt,
+mlx5_counter_query(struct rte_eth_dev *dev, uint32_t cnt,
 		   bool clear, uint64_t *pkts, uint64_t *bytes)
 {
 	const struct mlx5_flow_driver_ops *fops;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 0f0e59d..daa1f84 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -632,7 +632,7 @@ struct mlx5_flow {
 
 /* Meter policer statistics */
 struct mlx5_flow_policer_stats {
-	struct mlx5_flow_counter *cnt[RTE_COLORS + 1];
+	uint32_t cnt[RTE_COLORS + 1];
 	/**< Color counter, extra for drop. */
 	uint64_t stats_mask;
 	/**< Statistics mask for the colors. */
@@ -729,7 +729,7 @@ struct rte_flow {
 	TAILQ_ENTRY(rte_flow) next; /**< Pointer to the next flow structure. */
 	enum mlx5_flow_drv_type drv_type; /**< Driver type. */
 	struct mlx5_flow_rss rss; /**< RSS context. */
-	struct mlx5_flow_counter *counter; /**< Holds flow counter. */
+	uint32_t counter; /**< Holds flow counter. */
 	struct mlx5_flow_mreg_copy_resource *mreg_copy;
 	/**< pointer to metadata register copy table resource. */
 	struct mlx5_flow_meter *meter; /**< Holds flow meter. */
@@ -780,12 +780,12 @@ typedef int (*mlx5_flow_destroy_policer_rules_t)
 					(struct rte_eth_dev *dev,
 					 const struct mlx5_flow_meter *fm,
 					 const struct rte_flow_attr *attr);
-typedef struct mlx5_flow_counter * (*mlx5_flow_counter_alloc_t)
+typedef uint32_t (*mlx5_flow_counter_alloc_t)
 				   (struct rte_eth_dev *dev);
 typedef void (*mlx5_flow_counter_free_t)(struct rte_eth_dev *dev,
-					 struct mlx5_flow_counter *cnt);
+					 uint32_t cnt);
 typedef int (*mlx5_flow_counter_query_t)(struct rte_eth_dev *dev,
-					 struct mlx5_flow_counter *cnt,
+					 uint32_t cnt,
 					 bool clear, uint64_t *pkts,
 					 uint64_t *bytes);
 struct mlx5_flow_driver_ops {
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 91e130c..b7daa8f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3892,7 +3892,7 @@ struct field_modify_info modify_tcp[] = {
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] counter
- *   Pointer to the counter handler.
+ *   Index to the counter handler.
  */
 static void
 flow_dv_counter_release_fallback(struct rte_eth_dev *dev __rte_unused,
@@ -4384,9 +4384,9 @@ struct field_modify_info modify_tcp[] = {
  *   Counter flow group.
  *
  * @return
- *   pointer to flow counter on success, NULL otherwise and rte_errno is set.
+ *   Index to flow counter on success, 0 otherwise and rte_errno is set.
  */
-static struct mlx5_flow_counter *
+static uint32_t
 flow_dv_counter_alloc(struct rte_eth_dev *dev, uint32_t shared, uint32_t id,
 		      uint16_t group)
 {
@@ -4408,24 +4408,24 @@ struct field_modify_info modify_tcp[] = {
 
 	if (!priv->config.devx) {
 		rte_errno = ENOTSUP;
-		return NULL;
+		return 0;
 	}
 	if (shared) {
 		cnt_free = flow_dv_counter_shared_search(cont, id, &pool);
 		if (cnt_free) {
 			if (cnt_free->ref_cnt + 1 == 0) {
 				rte_errno = E2BIG;
-				return NULL;
+				return 0;
 			}
 			cnt_free->ref_cnt++;
 			cnt_idx = pool->index * MLX5_COUNTERS_PER_POOL +
 				  (cnt_free - pool->counters_raw) + 1;
-			return (struct mlx5_flow_counter *)(uintptr_t)cnt_idx;
+			return cnt_idx;
 		}
 	}
 	if (priv->counter_fallback)
-		return (struct mlx5_flow_counter *)(uintptr_t)
-		       flow_dv_counter_alloc_fallback(dev, shared, id);
+		return flow_dv_counter_alloc_fallback(dev, shared, id);
+
 	/* Pools which have free counters are at the start. */
 	TAILQ_FOREACH(pool, &cont->pool_list, next) {
 		/*
@@ -4446,7 +4446,7 @@ struct field_modify_info modify_tcp[] = {
 	if (!cnt_free) {
 		cont = flow_dv_counter_pool_prepare(dev, &cnt_free, batch);
 		if (!cont)
-			return NULL;
+			return 0;
 		pool = TAILQ_FIRST(&cont->pool_list);
 	}
 	cnt_free->batch = batch;
@@ -4466,13 +4466,13 @@ struct field_modify_info modify_tcp[] = {
 					(dcs->obj, offset);
 		if (!cnt_free->action) {
 			rte_errno = errno;
-			return NULL;
+			return 0;
 		}
 	}
 	/* Update the counter reset values. */
 	if (_flow_dv_query_count(dev, cnt_free, &cnt_free->hits,
 				 &cnt_free->bytes))
-		return NULL;
+		return 0;
 	cnt_free->shared = shared;
 	cnt_free->ref_cnt = 1;
 	cnt_free->id = id;
@@ -4488,7 +4488,7 @@ struct field_modify_info modify_tcp[] = {
 	cnt_idx = MLX5_MAKE_CNT_IDX(pool->index,
 				    (cnt_free - pool->counters_raw));
 	cnt_idx += batch * MLX5_CNT_BATCH_OFFSET;
-	return (struct mlx5_flow_counter *)(uintptr_t)cnt_idx;
+	return cnt_idx;
 }
 
 /**
@@ -4497,11 +4497,10 @@ struct field_modify_info modify_tcp[] = {
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] counter
- *   Pointer to the counter handler.
+ *   Index to the counter handler.
  */
 static void
-flow_dv_counter_release(struct rte_eth_dev *dev,
-			struct mlx5_flow_counter *counter)
+flow_dv_counter_release(struct rte_eth_dev *dev, uint32_t counter)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_counter_pool *pool;
@@ -4509,7 +4508,7 @@ struct field_modify_info modify_tcp[] = {
 
 	if (!counter)
 		return;
-	cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)counter, &pool);
+	cnt = flow_dv_counter_get_by_idx(dev, counter, &pool);
 	if (priv->counter_fallback) {
 		flow_dv_counter_release_fallback(dev, cnt);
 		return;
@@ -7588,11 +7587,11 @@ struct field_modify_info modify_tcp[] = {
 							count->shared,
 							count->id,
 							dev_flow->dv.group);
-			if (flow->counter == NULL)
+			if (!flow->counter)
 				goto cnt_err;
 			dev_flow->dv.actions[actions_n++] =
 				  (flow_dv_counter_get_by_idx(dev,
-				  (uintptr_t)flow->counter, NULL))->action;
+				  flow->counter, NULL))->action;
 			action_flags |= MLX5_FLOW_ACTION_COUNT;
 			break;
 cnt_err:
@@ -8465,7 +8464,7 @@ struct field_modify_info modify_tcp[] = {
 	__flow_dv_remove(dev, flow);
 	if (flow->counter) {
 		flow_dv_counter_release(dev, flow->counter);
-		flow->counter = NULL;
+		flow->counter = 0;
 	}
 	if (flow->meter) {
 		mlx5_flow_meter_detach(flow->meter);
@@ -8524,7 +8523,7 @@ struct field_modify_info modify_tcp[] = {
 		uint64_t pkts, bytes;
 		struct mlx5_flow_counter *cnt;
 
-		cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)flow->counter,
+		cnt = flow_dv_counter_get_by_idx(dev, flow->counter,
 						 NULL);
 		int err = _flow_dv_query_count(dev, cnt, &pkts,
 					       &bytes);
@@ -8793,7 +8792,7 @@ struct field_modify_info modify_tcp[] = {
 		if (!fm->policer_stats.cnt[i])
 			continue;
 		cnt = flow_dv_counter_get_by_idx(dev,
-		      (uintptr_t)fm->policer_stats.cnt[i], NULL);
+		      fm->policer_stats.cnt[i], NULL);
 		mtb->count_actns[i] = cnt->action;
 	}
 	/* Create drop action. */
@@ -9013,7 +9012,7 @@ struct field_modify_info modify_tcp[] = {
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] cnt
- *   Pointer to the flow counter.
+ *   Index to the flow counter.
  * @param[in] clear
  *   Set to clear the counter statistics.
  * @param[out] pkts
@@ -9025,8 +9024,7 @@ struct field_modify_info modify_tcp[] = {
  *   0 on success, otherwise return -1.
  */
 static int
-flow_dv_counter_query(struct rte_eth_dev *dev,
-		      struct mlx5_flow_counter *counter, bool clear,
+flow_dv_counter_query(struct rte_eth_dev *dev, uint32_t counter, bool clear,
 		      uint64_t *pkts, uint64_t *bytes)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -9037,7 +9035,7 @@ struct field_modify_info modify_tcp[] = {
 	if (!priv->config.devx)
 		return -1;
 
-	cnt = flow_dv_counter_get_by_idx(dev, (uintptr_t)counter, NULL);
+	cnt = flow_dv_counter_get_by_idx(dev, counter, NULL);
 	ret = _flow_dv_query_count(dev, cnt, &inn_pkts, &inn_bytes);
 	if (ret)
 		return -1;
@@ -9110,10 +9108,10 @@ struct field_modify_info modify_tcp[] = {
 /*
  * Mutex-protected thunk to lock-free flow_dv_counter_alloc().
  */
-static struct mlx5_flow_counter *
+static uint32_t
 flow_dv_counter_allocate(struct rte_eth_dev *dev)
 {
-	struct mlx5_flow_counter *cnt;
+	uint32_t cnt;
 
 	flow_dv_shared_lock(dev);
 	cnt = flow_dv_counter_alloc(dev, 0, 0, 1);
@@ -9125,7 +9123,7 @@ struct field_modify_info modify_tcp[] = {
  * Mutex-protected thunk to lock-free flow_dv_counter_release().
  */
 static void
-flow_dv_counter_free(struct rte_eth_dev *dev, struct mlx5_flow_counter *cnt)
+flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t cnt)
 {
 	flow_dv_shared_lock(dev);
 	flow_dv_counter_release(dev, cnt);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index c053778..227f963 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -145,9 +145,9 @@
  *   Counter identifier.
  *
  * @return
- *   A pointer to the counter, NULL otherwise and rte_errno is set.
+ *   Index to the counter, 0 otherwise and rte_errno is set.
  */
-static struct mlx5_flow_counter *
+static uint32_t
 flow_verbs_counter_new(struct rte_eth_dev *dev, uint32_t shared, uint32_t id)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -166,9 +166,7 @@
 				cnt = &pool->counters_raw[i];
 				if (cnt->shared && cnt->id == id) {
 					cnt->ref_cnt++;
-					return (struct mlx5_flow_counter *)
-					       (uintptr_t)
-					       MLX5_MAKE_CNT_IDX(pool_idx, i);
+					return MLX5_MAKE_CNT_IDX(pool_idx, i);
 				}
 			}
 		}
@@ -191,7 +189,7 @@
 				     (n_valid + MLX5_CNT_CONTAINER_RESIZE);
 			pools = rte_zmalloc(__func__, size, 0);
 			if (!pools)
-				return NULL;
+				return 0;
 			if (n_valid) {
 				memcpy(pools, cont->pools,
 				       sizeof(struct mlx5_flow_counter_pool *) *
@@ -205,7 +203,7 @@
 		size = sizeof(*pool) + sizeof(*cnt) * MLX5_COUNTERS_PER_POOL;
 		pool = rte_calloc(__func__, 1, size, 0);
 		if (!pool)
-			return NULL;
+			return 0;
 		for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
 			cnt = &pool->counters_raw[i];
 			TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
@@ -225,12 +223,11 @@
 	ret = flow_verbs_counter_create(dev, cnt);
 	if (!ret) {
 		TAILQ_REMOVE(&pool->counters, cnt, next);
-		return (struct mlx5_flow_counter *)(uintptr_t)
-		       MLX5_MAKE_CNT_IDX(pool_idx, (cnt - pool->counters_raw));
+		return MLX5_MAKE_CNT_IDX(pool_idx, (cnt - pool->counters_raw));
 	}
 	/* Some error occurred in Verbs library. */
 	rte_errno = -ret;
-	return NULL;
+	return 0;
 }
 
 /**
@@ -239,18 +236,17 @@
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] counter
- *   Pointer to the counter handler.
+ *   Index to the counter handler.
  */
 static void
-flow_verbs_counter_release(struct rte_eth_dev *dev,
-			   struct mlx5_flow_counter *counter)
+flow_verbs_counter_release(struct rte_eth_dev *dev, uint32_t counter)
 {
 	struct mlx5_flow_counter_pool *pool;
 	struct mlx5_flow_counter *cnt;
 
-	cnt = flow_verbs_counter_get_by_idx(dev, (uintptr_t)(void *)counter,
+	cnt = flow_verbs_counter_get_by_idx(dev, counter,
 					    &pool);
-	if (--counter->ref_cnt == 0) {
+	if (--cnt->ref_cnt == 0) {
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 		claim_zero(mlx5_glue->destroy_counter_set(cnt->cs));
 		cnt->cs = NULL;
@@ -275,10 +271,9 @@
 {
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
 	defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
-	if (flow->counter && flow->counter->cs) {
+	if (flow->counter) {
 		struct mlx5_flow_counter *cnt = flow_verbs_counter_get_by_idx
-						(dev, (uintptr_t)(void *)
-						flow->counter, NULL);
+						(dev, flow->counter, NULL);
 		struct rte_flow_query_count *qc = data;
 		uint64_t counters[2] = {0, 0};
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
@@ -1082,8 +1077,7 @@
 						  "cannot get counter"
 						  " context.");
 	}
-	cnt = flow_verbs_counter_get_by_idx(dev, (uintptr_t)(void *)
-					    flow->counter, NULL);
+	cnt = flow_verbs_counter_get_by_idx(dev, flow->counter, NULL);
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 	counter.counter_set_handle = cnt->cs->handle;
 	flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
@@ -1775,7 +1769,7 @@
 	}
 	if (flow->counter) {
 		flow_verbs_counter_release(dev, flow->counter);
-		flow->counter = NULL;
+		flow->counter = 0;
 	}
 }
 
-- 
1.8.3.1



* [dpdk-dev] [PATCH 7/8] net/mlx5: split the counter struct
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (5 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 6/8] net/mlx5: optimize flow counter handle type Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 8/8] net/mlx5: reorganize fallback counter management Suanming Mou
  2020-04-08 12:52 ` [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Raslan Darawsheh
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

Currently, the counter struct holds both the members used by batch
counters and those used only by non-batch counters. The members which are
only used by non-batch counters cost 16 bytes of extra memory for every
batch counter. As there are normally only a limited number of non-batch
counters, mixing the non-batch and batch counter members becomes quite
expensive for batch counters. If 1 million batch counters are created,
16 MB of memory is allocated that the batch counters will never use.

Splitting the mlx5_flow_counter struct into batch and non-batch parts
helps save this memory.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.c            |   6 +-
 drivers/net/mlx5/mlx5.h            |  32 ++++---
 drivers/net/mlx5/mlx5_flow_dv.c    | 173 ++++++++++++++++++-------------------
 drivers/net/mlx5/mlx5_flow_verbs.c |  58 ++++++++-----
 4 files changed, 145 insertions(+), 124 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 6a11b14..efdd53c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -391,9 +391,11 @@ struct mlx5_flow_id_pool *
 					claim_zero
 					(mlx5_glue->destroy_flow_action
 					       (pool->counters_raw[j].action));
-				if (!batch && pool->counters_raw[j].dcs)
+				if (!batch && MLX5_GET_POOL_CNT_EXT
+				    (pool, j)->dcs)
 					claim_zero(mlx5_devx_cmd_destroy
-						  (pool->counters_raw[j].dcs));
+						  (MLX5_GET_POOL_CNT_EXT
+						  (pool, j)->dcs));
 			}
 			TAILQ_REMOVE(&sh->cmng.ccont[i].pool_list, pool,
 				     next);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1501e61..6bbb5dd 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -247,6 +247,11 @@ struct mlx5_drop {
  */
 #define MLX5_MAKE_CNT_IDX(pi, offset) \
 	((pi) * MLX5_COUNTERS_PER_POOL + (offset) + 1)
+#define MLX5_CNT_TO_CNT_EXT(pool, cnt) (&((struct mlx5_flow_counter_ext *) \
+			    ((pool) + 1))[((cnt) - (pool)->counters_raw)])
+#define MLX5_GET_POOL_CNT_EXT(pool, offset) \
+			      (&((struct mlx5_flow_counter_ext *) \
+			      ((pool) + 1))[offset])
 
 struct mlx5_flow_counter_pool;
 
@@ -255,15 +260,25 @@ struct flow_counter_stats {
 	uint64_t bytes;
 };
 
-/* Counters information. */
+/* Generic counters information. */
 struct mlx5_flow_counter {
 	TAILQ_ENTRY(mlx5_flow_counter) next;
 	/**< Pointer to the next flow counter structure. */
+	union {
+		uint64_t hits; /**< Reset value of hits packets. */
+		int64_t query_gen; /**< Generation of the last release. */
+	};
+	uint64_t bytes; /**< Reset value of bytes. */
+	void *action; /**< Pointer to the dv action. */
+};
+
+/* Extend counters information for none batch counters. */
+struct mlx5_flow_counter_ext {
 	uint32_t shared:1; /**< Share counter ID with other flow rules. */
 	uint32_t batch: 1;
 	/**< Whether the counter was allocated by batch command. */
 	uint32_t ref_cnt:30; /**< Reference counter. */
-	uint32_t id; /**< Counter ID. */
+	uint32_t id; /**< User counter ID. */
 	union {  /**< Holds the counters for the rule. */
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 		struct ibv_counter_set *cs;
@@ -271,19 +286,13 @@ struct mlx5_flow_counter {
 		struct ibv_counters *cs;
 #endif
 		struct mlx5_devx_obj *dcs; /**< Counter Devx object. */
-		struct mlx5_flow_counter_pool *pool; /**< The counter pool. */
 	};
-	union {
-		uint64_t hits; /**< Reset value of hits packets. */
-		int64_t query_gen; /**< Generation of the last release. */
-	};
-	uint64_t bytes; /**< Reset value of bytes. */
-	void *action; /**< Pointer to the dv action. */
 };
 
+
 TAILQ_HEAD(mlx5_counters, mlx5_flow_counter);
 
-/* Counter pool structure - query is in pool resolution. */
+/* Generic counter pool structure - query is in pool resolution. */
 struct mlx5_flow_counter_pool {
 	TAILQ_ENTRY(mlx5_flow_counter_pool) next;
 	struct mlx5_counters counters; /* Free counter list. */
@@ -299,7 +308,8 @@ struct mlx5_flow_counter_pool {
 	rte_spinlock_t sl; /* The pool lock. */
 	struct mlx5_counter_stats_raw *raw;
 	struct mlx5_counter_stats_raw *raw_hw; /* The raw on HW working. */
-	struct mlx5_flow_counter counters_raw[]; /* The pool counters memory. */
+	struct mlx5_flow_counter counters_raw[MLX5_COUNTERS_PER_POOL];
+	/* The pool counters memory. */
 };
 
 struct mlx5_counter_stats_raw;
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b7daa8f..e051f8b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -3847,7 +3847,7 @@ struct field_modify_info modify_tcp[] = {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
 	struct mlx5_flow_counter_pool *pool;
-	struct mlx5_flow_counter *cnt = NULL;
+	struct mlx5_flow_counter_ext *cnt_ext;
 	struct mlx5_devx_obj *dcs = NULL;
 	uint32_t offset;
 
@@ -3869,20 +3869,18 @@ struct field_modify_info modify_tcp[] = {
 		pool = TAILQ_FIRST(&cont->pool_list);
 	}
 	offset = dcs->id % MLX5_COUNTERS_PER_POOL;
-	cnt = &pool->counters_raw[offset];
-	struct mlx5_flow_counter tmpl = {
-		.shared = shared,
-		.ref_cnt = 1,
-		.id = id,
-		.dcs = dcs,
-	};
-	tmpl.action = mlx5_glue->dv_create_flow_action_counter(dcs->obj, 0);
-	if (!tmpl.action) {
-		claim_zero(mlx5_devx_cmd_destroy(cnt->dcs));
+	cnt_ext = MLX5_GET_POOL_CNT_EXT(pool, offset);
+	cnt_ext->shared = shared;
+	cnt_ext->ref_cnt = 1;
+	cnt_ext->id = id;
+	cnt_ext->dcs = dcs;
+	pool->counters_raw[offset].action =
+	      mlx5_glue->dv_create_flow_action_counter(dcs->obj, 0);
+	if (!pool->counters_raw[offset].action) {
+		claim_zero(mlx5_devx_cmd_destroy(dcs));
 		rte_errno = errno;
 		return 0;
 	}
-	*cnt = tmpl;
 	return MLX5_MAKE_CNT_IDX(pool->index, offset);
 }
 
@@ -3892,20 +3890,16 @@ struct field_modify_info modify_tcp[] = {
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] counter
- *   Index to the counter handler.
+ *   Extend counter handler.
  */
 static void
 flow_dv_counter_release_fallback(struct rte_eth_dev *dev __rte_unused,
-				 struct mlx5_flow_counter *counter)
+				 struct mlx5_flow_counter_ext *counter)
 {
 	if (!counter)
 		return;
-	if (--counter->ref_cnt == 0) {
-		claim_zero(mlx5_glue->destroy_flow_action(counter->action));
-		claim_zero(mlx5_devx_cmd_destroy(counter->dcs));
-		counter->action = NULL;
-		counter->dcs = NULL;
-	}
+	claim_zero(mlx5_devx_cmd_destroy(counter->dcs));
+	counter->dcs = NULL;
 }
 
 /**
@@ -3925,7 +3919,7 @@ struct field_modify_info modify_tcp[] = {
  */
 static inline int
 _flow_dv_query_count_fallback(struct rte_eth_dev *dev __rte_unused,
-		     struct mlx5_flow_counter *cnt, uint64_t *pkts,
+		     struct mlx5_flow_counter_ext *cnt, uint64_t *pkts,
 		     uint64_t *bytes)
 {
 	return mlx5_devx_cmd_flow_counter_query(cnt->dcs, 0, 0, pkts, bytes,
@@ -3933,25 +3927,6 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
- * Get a pool by a counter.
- *
- * @param[in] cnt
- *   Pointer to the counter.
- *
- * @return
- *   The counter pool.
- */
-static struct mlx5_flow_counter_pool *
-flow_dv_counter_pool_get(struct mlx5_flow_counter *cnt)
-{
-	if (!cnt->batch) {
-		cnt -= cnt->dcs->id % MLX5_COUNTERS_PER_POOL;
-		return (struct mlx5_flow_counter_pool *)cnt - 1;
-	}
-	return cnt->pool;
-}
-
-/**
  * Get DV flow counter by index.
  *
  * @param[in] dev
@@ -4159,7 +4134,7 @@ struct field_modify_info modify_tcp[] = {
  * @param[in] dev
  *   Pointer to the Ethernet device structure.
  * @param[in] cnt
- *   Pointer to the flow counter.
+ *   Index to the flow counter.
  * @param[out] pkts
  *   The statistics value of packets.
  * @param[out] bytes
@@ -4169,17 +4144,23 @@ struct field_modify_info modify_tcp[] = {
  *   0 on success, otherwise a negative errno value and rte_errno is set.
  */
 static inline int
-_flow_dv_query_count(struct rte_eth_dev *dev,
-		     struct mlx5_flow_counter *cnt, uint64_t *pkts,
+_flow_dv_query_count(struct rte_eth_dev *dev, uint32_t counter, uint64_t *pkts,
 		     uint64_t *bytes)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flow_counter_pool *pool =
-			flow_dv_counter_pool_get(cnt);
-	int offset = cnt - &pool->counters_raw[0];
+	struct mlx5_flow_counter_pool *pool = NULL;
+	struct mlx5_flow_counter *cnt;
+	struct mlx5_flow_counter_ext *cnt_ext = NULL;
+	int offset;
 
-	if (priv->counter_fallback)
-		return _flow_dv_query_count_fallback(dev, cnt, pkts, bytes);
+	cnt = flow_dv_counter_get_by_idx(dev, counter, &pool);
+	MLX5_ASSERT(pool);
+	if (counter < MLX5_CNT_BATCH_OFFSET) {
+		cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
+		if (priv->counter_fallback)
+			return _flow_dv_query_count_fallback(dev, cnt_ext,
+							     pkts, bytes);
+	}
 
 	rte_spinlock_lock(&pool->sl);
 	/*
@@ -4187,10 +4168,11 @@ struct field_modify_info modify_tcp[] = {
 	 * current allocated in parallel to the host reading.
 	 * In this case the new counter values must be reported as 0.
 	 */
-	if (unlikely(!cnt->batch && cnt->dcs->id < pool->raw->min_dcs_id)) {
+	if (unlikely(cnt_ext && cnt_ext->dcs->id < pool->raw->min_dcs_id)) {
 		*pkts = 0;
 		*bytes = 0;
 	} else {
+		offset = cnt - &pool->counters_raw[0];
 		*pkts = rte_be_to_cpu_64(pool->raw->data[offset].hits);
 		*bytes = rte_be_to_cpu_64(pool->raw->data[offset].bytes);
 	}
@@ -4229,8 +4211,10 @@ struct field_modify_info modify_tcp[] = {
 		if (!cont)
 			return NULL;
 	}
-	size = sizeof(*pool) + MLX5_COUNTERS_PER_POOL *
-			sizeof(struct mlx5_flow_counter);
+	size = sizeof(*pool);
+	if (!batch)
+		size += MLX5_COUNTERS_PER_POOL *
+			sizeof(struct mlx5_flow_counter_ext);
 	pool = rte_calloc(__func__, 1, size, 0);
 	if (!pool) {
 		rte_errno = ENOMEM;
@@ -4307,9 +4291,10 @@ struct field_modify_info modify_tcp[] = {
 			rte_atomic64_set(&pool->a64_dcs,
 					 (int64_t)(uintptr_t)dcs);
 		}
-		cnt = &pool->counters_raw[dcs->id % MLX5_COUNTERS_PER_POOL];
+		i = dcs->id % MLX5_COUNTERS_PER_POOL;
+		cnt = &pool->counters_raw[i];
 		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
-		cnt->dcs = dcs;
+		MLX5_GET_POOL_CNT_EXT(pool, i)->dcs = dcs;
 		*cnt_free = cnt;
 		return cont;
 	}
@@ -4328,7 +4313,6 @@ struct field_modify_info modify_tcp[] = {
 	pool = TAILQ_FIRST(&cont->pool_list);
 	for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
 		cnt = &pool->counters_raw[i];
-		cnt->pool = pool;
 		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
 	}
 	*cnt_free = &pool->counters_raw[0];
@@ -4346,13 +4330,13 @@ struct field_modify_info modify_tcp[] = {
  *   mlx5 flow counter pool in the container,
  *
  * @return
- *   NULL if not existed, otherwise pointer to the shared counter.
+ *   NULL if not existed, otherwise pointer to the shared extend counter.
  */
-static struct mlx5_flow_counter *
+static struct mlx5_flow_counter_ext *
 flow_dv_counter_shared_search(struct mlx5_pools_container *cont, uint32_t id,
 			      struct mlx5_flow_counter_pool **ppool)
 {
-	static struct mlx5_flow_counter *cnt;
+	static struct mlx5_flow_counter_ext *cnt;
 	struct mlx5_flow_counter_pool *pool;
 	uint32_t i;
 	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
@@ -4360,10 +4344,10 @@ struct field_modify_info modify_tcp[] = {
 	for (i = 0; i < n_valid; i++) {
 		pool = cont->pools[i];
 		for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
-			cnt = &pool->counters_raw[i];
+			cnt = MLX5_GET_POOL_CNT_EXT(pool, i);
 			if (cnt->ref_cnt && cnt->shared && cnt->id == id) {
 				if (ppool)
-					*ppool = pool;
+					*ppool = cont->pools[i];
 				return cnt;
 			}
 		}
@@ -4393,6 +4377,7 @@ struct field_modify_info modify_tcp[] = {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_counter_pool *pool = NULL;
 	struct mlx5_flow_counter *cnt_free = NULL;
+	struct mlx5_flow_counter_ext *cnt_ext = NULL;
 	/*
 	 * Currently group 0 flow counter cannot be assigned to a flow if it is
 	 * not the first one in the batch counter allocation, so it is better
@@ -4411,15 +4396,16 @@ struct field_modify_info modify_tcp[] = {
 		return 0;
 	}
 	if (shared) {
-		cnt_free = flow_dv_counter_shared_search(cont, id, &pool);
-		if (cnt_free) {
-			if (cnt_free->ref_cnt + 1 == 0) {
+		cnt_ext = flow_dv_counter_shared_search(cont, id, &pool);
+		if (cnt_ext) {
+			if (cnt_ext->ref_cnt + 1 == 0) {
 				rte_errno = E2BIG;
 				return 0;
 			}
-			cnt_free->ref_cnt++;
+			cnt_ext->ref_cnt++;
 			cnt_idx = pool->index * MLX5_COUNTERS_PER_POOL +
-				  (cnt_free - pool->counters_raw) + 1;
+				  (cnt_ext->dcs->id % MLX5_COUNTERS_PER_POOL)
+				  + 1;
 			return cnt_idx;
 		}
 	}
@@ -4449,7 +4435,8 @@ struct field_modify_info modify_tcp[] = {
 			return 0;
 		pool = TAILQ_FIRST(&cont->pool_list);
 	}
-	cnt_free->batch = batch;
+	if (!batch)
+		cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt_free);
 	/* Create a DV counter action only in the first time usage. */
 	if (!cnt_free->action) {
 		uint16_t offset;
@@ -4460,7 +4447,7 @@ struct field_modify_info modify_tcp[] = {
 			dcs = pool->min_dcs;
 		} else {
 			offset = 0;
-			dcs = cnt_free->dcs;
+			dcs = cnt_ext->dcs;
 		}
 		cnt_free->action = mlx5_glue->dv_create_flow_action_counter
 					(dcs->obj, offset);
@@ -4469,13 +4456,18 @@ struct field_modify_info modify_tcp[] = {
 			return 0;
 		}
 	}
+	cnt_idx = MLX5_MAKE_CNT_IDX(pool->index,
+				    (cnt_free - pool->counters_raw));
+	cnt_idx += batch * MLX5_CNT_BATCH_OFFSET;
 	/* Update the counter reset values. */
-	if (_flow_dv_query_count(dev, cnt_free, &cnt_free->hits,
+	if (_flow_dv_query_count(dev, cnt_idx, &cnt_free->hits,
 				 &cnt_free->bytes))
 		return 0;
-	cnt_free->shared = shared;
-	cnt_free->ref_cnt = 1;
-	cnt_free->id = id;
+	if (cnt_ext) {
+		cnt_ext->shared = shared;
+		cnt_ext->ref_cnt = 1;
+		cnt_ext->id = id;
+	}
 	if (!priv->sh->cmng.query_thread_on)
 		/* Start the asynchronous batch query by the host thread. */
 		mlx5_set_query_alarm(priv->sh);
@@ -4485,9 +4477,6 @@ struct field_modify_info modify_tcp[] = {
 		TAILQ_REMOVE(&cont->pool_list, pool, next);
 		TAILQ_INSERT_TAIL(&cont->pool_list, pool, next);
 	}
-	cnt_idx = MLX5_MAKE_CNT_IDX(pool->index,
-				    (cnt_free - pool->counters_raw));
-	cnt_idx += batch * MLX5_CNT_BATCH_OFFSET;
 	return cnt_idx;
 }
 
@@ -4503,27 +4492,33 @@ struct field_modify_info modify_tcp[] = {
 flow_dv_counter_release(struct rte_eth_dev *dev, uint32_t counter)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_flow_counter_pool *pool;
+	struct mlx5_flow_counter_pool *pool = NULL;
 	struct mlx5_flow_counter *cnt;
+	struct mlx5_flow_counter_ext *cnt_ext = NULL;
 
 	if (!counter)
 		return;
 	cnt = flow_dv_counter_get_by_idx(dev, counter, &pool);
+	MLX5_ASSERT(pool);
+	if (counter < MLX5_CNT_BATCH_OFFSET)
+		cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
+	if (cnt_ext && --cnt_ext->ref_cnt)
+		return;
 	if (priv->counter_fallback) {
-		flow_dv_counter_release_fallback(dev, cnt);
+		claim_zero(mlx5_glue->destroy_flow_action(cnt->action));
+		flow_dv_counter_release_fallback(dev, cnt_ext);
+		cnt->action = NULL;
 		return;
 	}
-	if (--cnt->ref_cnt == 0) {
-		/* Put the counter in the end - the last updated one. */
-		TAILQ_INSERT_TAIL(&pool->counters, cnt, next);
-		/*
-		 * Counters released between query trigger and handler need
-		 * to wait the next round of query. Since the packets arrive
-		 * in the gap period will not be taken into account to the
-		 * old counter.
-		 */
-		cnt->query_gen = rte_atomic64_read(&pool->start_query_gen);
-	}
+	/* Put the counter in the end - the last updated one. */
+	TAILQ_INSERT_TAIL(&pool->counters, cnt, next);
+	/*
+	 * Counters released between query trigger and handler need
+	 * to wait the next round of query. Since the packets arrive
+	 * in the gap period will not be taken into account to the
+	 * old counter.
+	 */
+	cnt->query_gen = rte_atomic64_read(&pool->start_query_gen);
 }
 
 /**
@@ -8525,7 +8520,7 @@ struct field_modify_info modify_tcp[] = {
 
 		cnt = flow_dv_counter_get_by_idx(dev, flow->counter,
 						 NULL);
-		int err = _flow_dv_query_count(dev, cnt, &pkts,
+		int err = _flow_dv_query_count(dev, flow->counter, &pkts,
 					       &bytes);
 
 		if (err)
@@ -9035,10 +9030,10 @@ struct field_modify_info modify_tcp[] = {
 	if (!priv->config.devx)
 		return -1;
 
-	cnt = flow_dv_counter_get_by_idx(dev, counter, NULL);
-	ret = _flow_dv_query_count(dev, cnt, &inn_pkts, &inn_bytes);
+	ret = _flow_dv_query_count(dev, counter, &inn_pkts, &inn_bytes);
 	if (ret)
 		return -1;
+	cnt = flow_dv_counter_get_by_idx(dev, counter, NULL);
 	*pkts = inn_pkts - cnt->hits;
 	*bytes = inn_bytes - cnt->bytes;
 	if (clear) {
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 227f963..eb558fd 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -83,7 +83,7 @@
  */
 static int
 flow_verbs_counter_create(struct rte_eth_dev *dev,
-			  struct mlx5_flow_counter *counter)
+			  struct mlx5_flow_counter_ext *counter)
 {
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -153,6 +153,7 @@
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
 	struct mlx5_flow_counter_pool *pool = NULL;
+	struct mlx5_flow_counter_ext *cnt_ext = NULL;
 	struct mlx5_flow_counter *cnt = NULL;
 	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
 	uint32_t pool_idx;
@@ -163,9 +164,9 @@
 		for (pool_idx = 0; pool_idx < n_valid; ++pool_idx) {
 			pool = cont->pools[pool_idx];
 			for (i = 0; i < MLX5_COUNTERS_PER_POOL; ++i) {
-				cnt = &pool->counters_raw[i];
-				if (cnt->shared && cnt->id == id) {
-					cnt->ref_cnt++;
+				cnt_ext = MLX5_GET_POOL_CNT_EXT(pool, i);
+				if (cnt_ext->shared && cnt_ext->id == id) {
+					cnt_ext->ref_cnt++;
 					return MLX5_MAKE_CNT_IDX(pool_idx, i);
 				}
 			}
@@ -200,7 +201,8 @@
 			cont->n += MLX5_CNT_CONTAINER_RESIZE;
 		}
 		/* Allocate memory for new pool*/
-		size = sizeof(*pool) + sizeof(*cnt) * MLX5_COUNTERS_PER_POOL;
+		size = sizeof(*pool) + sizeof(*cnt_ext) *
+		       MLX5_COUNTERS_PER_POOL;
 		pool = rte_calloc(__func__, 1, size, 0);
 		if (!pool)
 			return 0;
@@ -214,16 +216,18 @@
 		rte_atomic16_add(&cont->n_valid, 1);
 		TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
 	}
-	cnt->id = id;
-	cnt->shared = shared;
-	cnt->ref_cnt = 1;
+	i = cnt - pool->counters_raw;
+	cnt_ext = MLX5_GET_POOL_CNT_EXT(pool, i);
+	cnt_ext->id = id;
+	cnt_ext->shared = shared;
+	cnt_ext->ref_cnt = 1;
 	cnt->hits = 0;
 	cnt->bytes = 0;
 	/* Create counter with Verbs. */
-	ret = flow_verbs_counter_create(dev, cnt);
+	ret = flow_verbs_counter_create(dev, cnt_ext);
 	if (!ret) {
 		TAILQ_REMOVE(&pool->counters, cnt, next);
-		return MLX5_MAKE_CNT_IDX(pool_idx, (cnt - pool->counters_raw));
+		return MLX5_MAKE_CNT_IDX(pool_idx, i);
 	}
 	/* Some error occurred in Verbs library. */
 	rte_errno = -ret;
@@ -243,16 +247,18 @@
 {
 	struct mlx5_flow_counter_pool *pool;
 	struct mlx5_flow_counter *cnt;
+	struct mlx5_flow_counter_ext *cnt_ext;
 
 	cnt = flow_verbs_counter_get_by_idx(dev, counter,
 					    &pool);
-	if (--cnt->ref_cnt == 0) {
+	cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
+	if (--cnt_ext->ref_cnt == 0) {
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
-		claim_zero(mlx5_glue->destroy_counter_set(cnt->cs));
-		cnt->cs = NULL;
+		claim_zero(mlx5_glue->destroy_counter_set(cnt_ext->cs));
+		cnt_ext->cs = NULL;
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
-		claim_zero(mlx5_glue->destroy_counters(cnt->cs));
-		cnt->cs = NULL;
+		claim_zero(mlx5_glue->destroy_counters(cnt_ext->cs));
+		cnt_ext->cs = NULL;
 #endif
 		TAILQ_INSERT_HEAD(&pool->counters, cnt, next);
 	}
@@ -272,13 +278,16 @@
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
 	defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
 	if (flow->counter) {
+		struct mlx5_flow_counter_pool *pool;
 		struct mlx5_flow_counter *cnt = flow_verbs_counter_get_by_idx
-						(dev, flow->counter, NULL);
+						(dev, flow->counter, &pool);
+		struct mlx5_flow_counter_ext *cnt_ext = MLX5_CNT_TO_CNT_EXT
+							(pool, cnt);
 		struct rte_flow_query_count *qc = data;
 		uint64_t counters[2] = {0, 0};
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
 		struct ibv_query_counter_set_attr query_cs_attr = {
-			.cs = cnt->cs,
+			.cs = cnt_ext->cs,
 			.query_flags = IBV_COUNTER_SET_FORCE_UPDATE,
 		};
 		struct ibv_counter_set_data query_out = {
@@ -289,7 +298,7 @@
 						       &query_out);
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
 		int err = mlx5_glue->query_counters
-			       (cnt->cs, counters,
+			       (cnt_ext->cs, counters,
 				RTE_DIM(counters),
 				IBV_READ_COUNTERS_ATTR_PREFER_CACHED);
 #endif
@@ -1057,9 +1066,11 @@
 {
 	const struct rte_flow_action_count *count = action->conf;
 	struct rte_flow *flow = dev_flow->flow;
-	struct mlx5_flow_counter *cnt = NULL;
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
 	defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
+	struct mlx5_flow_counter_pool *pool;
+	struct mlx5_flow_counter *cnt = NULL;
+	struct mlx5_flow_counter_ext *cnt_ext;
 	unsigned int size = sizeof(struct ibv_flow_spec_counter_action);
 	struct ibv_flow_spec_counter_action counter = {
 		.type = IBV_FLOW_SPEC_ACTION_COUNT,
@@ -1077,12 +1088,15 @@
 						  "cannot get counter"
 						  " context.");
 	}
-	cnt = flow_verbs_counter_get_by_idx(dev, flow->counter, NULL);
 #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
-	counter.counter_set_handle = cnt->cs->handle;
+	cnt = flow_verbs_counter_get_by_idx(dev, flow->counter, &pool);
+	cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
+	counter.counter_set_handle = cnt_ext->cs->handle;
 	flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
 #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
-	counter.counters = cnt->cs;
+	cnt = flow_verbs_counter_get_by_idx(dev, flow->counter, &pool);
+	cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
+	counter.counters = cnt_ext->cs;
 	flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
 #endif
 	return 0;
-- 
1.8.3.1
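
The memory saving this patch implements — allocating the extension members only for non-batch pools — can be sketched standalone as below. The struct members, the MLX5_GET_POOL_CNT_EXT layout (extension array assumed to sit immediately after the pool object), and the pool size are illustrative assumptions, not the driver's exact definitions.

```c
/*
 * Hedged sketch of the split counter layout: a small base struct for
 * every counter, plus an extension struct allocated only for non-batch
 * pools. Names mirror the patch; values and layout are assumptions.
 */
#include <stdint.h>
#include <stdlib.h>

#define COUNTERS_PER_POOL 512	/* stands in for MLX5_COUNTERS_PER_POOL */

struct flow_counter {		/* members every counter needs */
	void *action;
	uint64_t hits;
	uint64_t bytes;
};

struct flow_counter_ext {	/* members only non-batch counters need */
	uint32_t shared:1;
	uint32_t ref_cnt:31;
	uint32_t id;
	void *dcs;
};

struct flow_counter_pool {
	uint32_t index;
	struct flow_counter counters_raw[COUNTERS_PER_POOL];
	/* extension array follows here for non-batch pools only */
};

/* Assumed shape of MLX5_GET_POOL_CNT_EXT(pool, offset). */
static struct flow_counter_ext *
get_pool_cnt_ext(struct flow_counter_pool *pool, int offset)
{
	return (struct flow_counter_ext *)(pool + 1) + offset;
}

/* Batch pools skip the extension area entirely, saving its memory. */
static struct flow_counter_pool *pool_create(int batch)
{
	size_t size = sizeof(struct flow_counter_pool);

	if (!batch)
		size += COUNTERS_PER_POOL * sizeof(struct flow_counter_ext);
	return calloc(1, size);
}
```

With the most common pools being batch pools, only the rarer non-batch pools pay for the shared/ref_cnt/id/dcs members.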



* [dpdk-dev] [PATCH 8/8] net/mlx5: reorganize fallback counter management
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (6 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 7/8] net/mlx5: split the counter struct Suanming Mou
@ 2020-04-07  3:59 ` Suanming Mou
  2020-04-08 12:52 ` [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Raslan Darawsheh
  8 siblings, 0 replies; 10+ messages in thread
From: Suanming Mou @ 2020-04-07  3:59 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko; +Cc: dev, rasland

Currently, the fallback counter is also allocated from the pool, so the
dedicated fallback functions largely duplicate the normal counter code.

Reorganize the fallback counter code to reuse the normal counter code.

Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
 drivers/net/mlx5/mlx5.h         |   1 -
 drivers/net/mlx5/mlx5_flow_dv.c | 193 ++++++++++------------------------------
 2 files changed, 47 insertions(+), 147 deletions(-)

diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 6bbb5dd..396dba7 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -299,7 +299,6 @@ struct mlx5_flow_counter_pool {
 	union {
 		struct mlx5_devx_obj *min_dcs;
 		rte_atomic64_t a64_dcs;
-		int dcs_id; /* Fallback pool counter id range. */
 	};
 	/* The devx object of the minimum counter ID. */
 	rte_atomic64_t start_query_gen; /* Query start round. */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e051f8b..c547f8d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -73,13 +73,6 @@
 	uint32_t attr;
 };
 
-static struct mlx5_flow_counter_pool *
-flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, bool fallback,
-			int id);
-static struct mlx5_pools_container *
-flow_dv_pool_create(struct rte_eth_dev *dev, struct mlx5_devx_obj *dcs,
-		    uint32_t batch);
-
 /**
  * Initialize flow attributes structure according to flow items' types.
  *
@@ -3828,105 +3821,6 @@ struct field_modify_info modify_tcp[] = {
 }
 
 /**
- * Get or create a flow counter.
- *
- * @param[in] dev
- *   Pointer to the Ethernet device structure.
- * @param[in] shared
- *   Indicate if this counter is shared with other flows.
- * @param[in] id
- *   Counter identifier.
- *
- * @return
- *   Index to flow counter on success, 0 otherwise and rte_errno is set.
- */
-static uint32_t
-flow_dv_counter_alloc_fallback(struct rte_eth_dev *dev, uint32_t shared,
-			       uint32_t id)
-{
-	struct mlx5_priv *priv = dev->data->dev_private;
-	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, 0, 0);
-	struct mlx5_flow_counter_pool *pool;
-	struct mlx5_flow_counter_ext *cnt_ext;
-	struct mlx5_devx_obj *dcs = NULL;
-	uint32_t offset;
-
-	if (!priv->config.devx) {
-		rte_errno = ENOTSUP;
-		return 0;
-	}
-	dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0);
-	if (!dcs)
-		return 0;
-	pool = flow_dv_find_pool_by_id(cont, true, dcs->id);
-	if (!pool) {
-		cont = flow_dv_pool_create(dev, dcs, 0);
-		if (!cont) {
-			mlx5_devx_cmd_destroy(dcs);
-			rte_errno = ENOMEM;
-			return 0;
-		}
-		pool = TAILQ_FIRST(&cont->pool_list);
-	}
-	offset = dcs->id % MLX5_COUNTERS_PER_POOL;
-	cnt_ext = MLX5_GET_POOL_CNT_EXT(pool, offset);
-	cnt_ext->shared = shared;
-	cnt_ext->ref_cnt = 1;
-	cnt_ext->id = id;
-	cnt_ext->dcs = dcs;
-	pool->counters_raw[offset].action =
-	      mlx5_glue->dv_create_flow_action_counter(dcs->obj, 0);
-	if (!pool->counters_raw[offset].action) {
-		claim_zero(mlx5_devx_cmd_destroy(dcs));
-		rte_errno = errno;
-		return 0;
-	}
-	return MLX5_MAKE_CNT_IDX(pool->index, offset);
-}
-
-/**
- * Release a flow counter.
- *
- * @param[in] dev
- *   Pointer to the Ethernet device structure.
- * @param[in] counter
- *   Extend counter handler.
- */
-static void
-flow_dv_counter_release_fallback(struct rte_eth_dev *dev __rte_unused,
-				 struct mlx5_flow_counter_ext *counter)
-{
-	if (!counter)
-		return;
-	claim_zero(mlx5_devx_cmd_destroy(counter->dcs));
-	counter->dcs = NULL;
-}
-
-/**
- * Query a devx flow counter.
- *
- * @param[in] dev
- *   Pointer to the Ethernet device structure.
- * @param[in] cnt
- *   Pointer to the flow counter.
- * @param[out] pkts
- *   The statistics value of packets.
- * @param[out] bytes
- *   The statistics value of bytes.
- *
- * @return
- *   0 on success, otherwise a negative errno value and rte_errno is set.
- */
-static inline int
-_flow_dv_query_count_fallback(struct rte_eth_dev *dev __rte_unused,
-		     struct mlx5_flow_counter_ext *cnt, uint64_t *pkts,
-		     uint64_t *bytes)
-{
-	return mlx5_devx_cmd_flow_counter_query(cnt->dcs, 0, 0, pkts, bytes,
-						0, NULL, NULL, 0);
-}
-
-/**
  * Get DV flow counter by index.
  *
  * @param[in] dev
@@ -3968,8 +3862,6 @@ struct field_modify_info modify_tcp[] = {
  *
  * @param[in] cont
  *   Pointer to the counter container.
- * @param[in] fallback
- *   Fallback mode.
  * @param[in] id
  *   The counter devx ID.
  *
@@ -3977,16 +3869,15 @@ struct field_modify_info modify_tcp[] = {
  *   The counter pool pointer if exists, NULL otherwise,
  */
 static struct mlx5_flow_counter_pool *
-flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, bool fallback,
-			int id)
+flow_dv_find_pool_by_id(struct mlx5_pools_container *cont, int id)
 {
 	uint32_t i;
 	uint32_t n_valid = rte_atomic16_read(&cont->n_valid);
 
 	for (i = 0; i < n_valid; i++) {
 		struct mlx5_flow_counter_pool *pool = cont->pools[i];
-		int base = ((fallback ? pool->dcs_id : pool->min_dcs->id) /
-			   MLX5_COUNTERS_PER_POOL) * MLX5_COUNTERS_PER_POOL;
+		int base = (pool->min_dcs->id / MLX5_COUNTERS_PER_POOL) *
+			   MLX5_COUNTERS_PER_POOL;
 
 		if (id >= base && id < base + MLX5_COUNTERS_PER_POOL) {
 			/*
@@ -4089,12 +3980,14 @@ struct field_modify_info modify_tcp[] = {
 			MLX5_CNT_CONTAINER(priv->sh, batch, 0);
 	struct mlx5_pools_container *new_cont =
 			MLX5_CNT_CONTAINER_UNUSED(priv->sh, batch, 0);
-	struct mlx5_counter_stats_mem_mng *mem_mng;
+	struct mlx5_counter_stats_mem_mng *mem_mng = NULL;
 	uint32_t resize = cont->n + MLX5_CNT_CONTAINER_RESIZE;
 	uint32_t mem_size = sizeof(struct mlx5_flow_counter_pool *) * resize;
 	int i;
 
-	if (cont != MLX5_CNT_CONTAINER(priv->sh, batch, 1)) {
+	/* Fallback mode has no background thread. Skip the check. */
+	if (!priv->counter_fallback &&
+	    cont != MLX5_CNT_CONTAINER(priv->sh, batch, 1)) {
 		/* The last resize still hasn't detected by the host thread. */
 		rte_errno = EAGAIN;
 		return NULL;
@@ -4107,16 +4000,29 @@ struct field_modify_info modify_tcp[] = {
 	if (cont->n)
 		memcpy(new_cont->pools, cont->pools, cont->n *
 		       sizeof(struct mlx5_flow_counter_pool *));
-	mem_mng = flow_dv_create_counter_stat_mem_mng(dev,
-		MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES);
-	if (!mem_mng) {
-		rte_free(new_cont->pools);
-		return NULL;
+	/*
+	 * Fallback mode query the counter directly, no background query
+	 * resources are needed.
+	 */
+	if (!priv->counter_fallback) {
+		mem_mng = flow_dv_create_counter_stat_mem_mng(dev,
+			MLX5_CNT_CONTAINER_RESIZE + MLX5_MAX_PENDING_QUERIES);
+		if (!mem_mng) {
+			rte_free(new_cont->pools);
+			return NULL;
+		}
+		for (i = 0; i < MLX5_MAX_PENDING_QUERIES; ++i)
+			LIST_INSERT_HEAD(&priv->sh->cmng.free_stat_raws,
+					 mem_mng->raws +
+					 MLX5_CNT_CONTAINER_RESIZE +
+					 i, next);
+	} else {
+		/*
+		 * Release the old container pools directly as no background
+		 * thread helps that.
+		 */
+		rte_free(cont->pools);
 	}
-	for (i = 0; i < MLX5_MAX_PENDING_QUERIES; ++i)
-		LIST_INSERT_HEAD(&priv->sh->cmng.free_stat_raws,
-				 mem_mng->raws + MLX5_CNT_CONTAINER_RESIZE +
-				 i, next);
 	new_cont->n = resize;
 	rte_atomic16_set(&new_cont->n_valid, rte_atomic16_read(&cont->n_valid));
 	TAILQ_INIT(&new_cont->pool_list);
@@ -4158,8 +4064,8 @@ struct field_modify_info modify_tcp[] = {
 	if (counter < MLX5_CNT_BATCH_OFFSET) {
 		cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
 		if (priv->counter_fallback)
-			return _flow_dv_query_count_fallback(dev, cnt_ext,
-							     pkts, bytes);
+			return mlx5_devx_cmd_flow_counter_query(cnt_ext->dcs, 0,
+					0, pkts, bytes, 0, NULL, NULL, 0);
 	}
 
 	rte_spinlock_lock(&pool->sl);
@@ -4220,11 +4126,9 @@ struct field_modify_info modify_tcp[] = {
 		rte_errno = ENOMEM;
 		return NULL;
 	}
-	if (priv->counter_fallback)
-		pool->dcs_id = dcs->id;
-	else
-		pool->min_dcs = dcs;
-	pool->raw = cont->init_mem_mng->raws + n_valid %
+	pool->min_dcs = dcs;
+	if (!priv->counter_fallback)
+		pool->raw = cont->init_mem_mng->raws + n_valid %
 						     MLX5_CNT_CONTAINER_RESIZE;
 	pool->raw_hw = NULL;
 	rte_spinlock_init(&pool->sl);
@@ -4236,7 +4140,13 @@ struct field_modify_info modify_tcp[] = {
 	 * without the last query finished and stats updated to the memory.
 	 */
 	rte_atomic64_set(&pool->start_query_gen, 0x2);
-	rte_atomic64_set(&pool->end_query_gen, 0x2);
+	/*
+	 * There's no background query thread for fallback mode, set the
+	 * end_query_gen to the maximum value since no need to wait for
+	 * statistics update.
+	 */
+	rte_atomic64_set(&pool->end_query_gen, priv->counter_fallback ?
+			 INT64_MAX : 0x2);
 	TAILQ_INIT(&pool->counters);
 	TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
 	pool->index = n_valid;
@@ -4279,7 +4189,7 @@ struct field_modify_info modify_tcp[] = {
 		dcs = mlx5_devx_cmd_flow_counter_alloc(priv->sh->ctx, 0);
 		if (!dcs)
 			return NULL;
-		pool = flow_dv_find_pool_by_id(cont, false, dcs->id);
+		pool = flow_dv_find_pool_by_id(cont, dcs->id);
 		if (!pool) {
 			cont = flow_dv_pool_create(dev, dcs, batch);
 			if (!cont) {
@@ -4386,7 +4296,7 @@ struct field_modify_info modify_tcp[] = {
 	 * A counter can be shared between different groups so need to take
 	 * shared counters from the single container.
 	 */
-	uint32_t batch = (group && !shared) ? 1 : 0;
+	uint32_t batch = (group && !shared && !priv->counter_fallback) ? 1 : 0;
 	struct mlx5_pools_container *cont = MLX5_CNT_CONTAINER(priv->sh, batch,
 							       0);
 	uint32_t cnt_idx;
@@ -4409,9 +4319,6 @@ struct field_modify_info modify_tcp[] = {
 			return cnt_idx;
 		}
 	}
-	if (priv->counter_fallback)
-		return flow_dv_counter_alloc_fallback(dev, shared, id);
-
 	/* Pools which has a free counters are in the start. */
 	TAILQ_FOREACH(pool, &cont->pool_list, next) {
 		/*
@@ -4468,7 +4375,7 @@ struct field_modify_info modify_tcp[] = {
 		cnt_ext->ref_cnt = 1;
 		cnt_ext->id = id;
 	}
-	if (!priv->sh->cmng.query_thread_on)
+	if (!priv->counter_fallback && !priv->sh->cmng.query_thread_on)
 		/* Start the asynchronous batch query by the host thread. */
 		mlx5_set_query_alarm(priv->sh);
 	TAILQ_REMOVE(&pool->counters, cnt_free, next);
@@ -4491,7 +4398,6 @@ struct field_modify_info modify_tcp[] = {
 static void
 flow_dv_counter_release(struct rte_eth_dev *dev, uint32_t counter)
 {
-	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_counter_pool *pool = NULL;
 	struct mlx5_flow_counter *cnt;
 	struct mlx5_flow_counter_ext *cnt_ext = NULL;
@@ -4500,15 +4406,10 @@ struct field_modify_info modify_tcp[] = {
 		return;
 	cnt = flow_dv_counter_get_by_idx(dev, counter, &pool);
 	MLX5_ASSERT(pool);
-	if (counter < MLX5_CNT_BATCH_OFFSET)
+	if (counter < MLX5_CNT_BATCH_OFFSET) {
 		cnt_ext = MLX5_CNT_TO_CNT_EXT(pool, cnt);
-	if (cnt_ext && --cnt_ext->ref_cnt)
-		return;
-	if (priv->counter_fallback) {
-		claim_zero(mlx5_glue->destroy_flow_action(cnt->action));
-		flow_dv_counter_release_fallback(dev, cnt_ext);
-		cnt->action = NULL;
-		return;
+		if (cnt_ext && --cnt_ext->ref_cnt)
+			return;
 	}
 	/* Put the counter in the end - the last updated one. */
 	TAILQ_INSERT_TAIL(&pool->counters, cnt, next);
-- 
1.8.3.1



* Re: [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize
  2020-04-07  3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
                   ` (7 preceding siblings ...)
  2020-04-07  3:59 ` [dpdk-dev] [PATCH 8/8] net/mlx5: reorganize fallback counter management Suanming Mou
@ 2020-04-08 12:52 ` Raslan Darawsheh
  8 siblings, 0 replies; 10+ messages in thread
From: Raslan Darawsheh @ 2020-04-08 12:52 UTC (permalink / raw)
  To: Suanming Mou; +Cc: dev

Hi,

> -----Original Message-----
> From: Suanming Mou <suanmingm@mellanox.com>
> Sent: Tuesday, April 7, 2020 7:00 AM
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@mellanox.com>
> Subject: [PATCH 0/8] net/mlx5 counter optimize

Series applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


