From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 74FABA0577; Tue, 7 Apr 2020 06:00:15 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5BB7E1BF30; Tue, 7 Apr 2020 05:59:59 +0200 (CEST) Received: from git-send-mailer.rdmz.labs.mlnx (unknown [37.142.13.130]) by dpdk.org (Postfix) with ESMTP id 89FBA1BF1B for ; Tue, 7 Apr 2020 05:59:57 +0200 (CEST) From: Suanming Mou To: Matan Azrad , Shahaf Shuler , Viacheslav Ovsiienko Cc: dev@dpdk.org, rasland@mellanox.com Date: Tue, 7 Apr 2020 11:59:41 +0800 Message-Id: <1586231987-338112-3-git-send-email-suanmingm@mellanox.com> X-Mailer: git-send-email 1.8.3.1 In-Reply-To: <1586231987-338112-1-git-send-email-suanmingm@mellanox.com> References: <1586231987-338112-1-git-send-email-suanmingm@mellanox.com> Subject: [dpdk-dev] [PATCH 2/8] net/mlx5: optimize counter release query generation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Query generation was introduced to prevent a counter from being reallocated before its statistics are fully updated. A counter released between the query trigger and the query handler may miss the packets that arrived in that trigger-to-handler gap period. To handle this, a counter can only be reallocated once the pool query_gen is greater than the counter query_gen + 1, which indicates that a new round of query has finished and the statistics are fully updated. Splitting the pool query_gen into start_query_gen and end_query_gen makes it possible to identify precisely which counters were released in the gap period, so counters released before the query trigger or after the query handler can be reallocated more efficiently.
Signed-off-by: Suanming Mou Acked-by: Matan Azrad --- drivers/net/mlx5/mlx5.h | 3 ++- drivers/net/mlx5/mlx5_flow.c | 14 +++++++++++++- drivers/net/mlx5/mlx5_flow_dv.c | 18 ++++++++++++++---- 3 files changed, 29 insertions(+), 6 deletions(-) diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h index 34ab475..fe7e684 100644 --- a/drivers/net/mlx5/mlx5.h +++ b/drivers/net/mlx5/mlx5.h @@ -283,7 +283,8 @@ struct mlx5_flow_counter_pool { rte_atomic64_t a64_dcs; }; /* The devx object of the minimum counter ID. */ - rte_atomic64_t query_gen; + rte_atomic64_t start_query_gen; /* Query start round. */ + rte_atomic64_t end_query_gen; /* Query end round. */ uint32_t n_counters: 16; /* Number of devx allocated counters. */ rte_spinlock_t sl; /* The pool lock. */ struct mlx5_counter_stats_raw *raw; diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index 6438a14..d2e9cc4 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -5655,6 +5655,13 @@ struct mlx5_flow_counter * dcs = (struct mlx5_devx_obj *)(uintptr_t)rte_atomic64_read (&pool->a64_dcs); offset = batch ? 0 : dcs->id % MLX5_COUNTERS_PER_POOL; + /* + * Identify the counters released between query trigger and query + * handler more efficiently. A counter released in this gap period + * should wait for a new round of query, as the newly arrived packets + * will not be taken into account. 
+ */ + rte_atomic64_add(&pool->start_query_gen, 1); ret = mlx5_devx_cmd_flow_counter_query(dcs, 0, MLX5_COUNTERS_PER_POOL - offset, NULL, NULL, pool->raw_hw->mem_mng->dm->id, @@ -5663,6 +5670,7 @@ struct mlx5_flow_counter * sh->devx_comp, (uint64_t)(uintptr_t)pool); if (ret) { + rte_atomic64_sub(&pool->start_query_gen, 1); DRV_LOG(ERR, "Failed to trigger asynchronous query for dcs ID" " %d", pool->min_dcs->id); pool->raw_hw = NULL; @@ -5702,13 +5710,17 @@ struct mlx5_flow_counter * struct mlx5_counter_stats_raw *raw_to_free; if (unlikely(status)) { + rte_atomic64_sub(&pool->start_query_gen, 1); raw_to_free = pool->raw_hw; } else { raw_to_free = pool->raw; rte_spinlock_lock(&pool->sl); pool->raw = pool->raw_hw; rte_spinlock_unlock(&pool->sl); - rte_atomic64_add(&pool->query_gen, 1); + MLX5_ASSERT(rte_atomic64_read(&pool->end_query_gen) + 1 == + rte_atomic64_read(&pool->start_query_gen)); + rte_atomic64_set(&pool->end_query_gen, + rte_atomic64_read(&pool->start_query_gen)); /* Be sure the new raw counters data is updated in memory. */ rte_cio_wmb(); } diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index f022751..074f05f 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -4190,8 +4190,12 @@ struct field_modify_info modify_tcp[] = { /* * The generation of the new allocated counters in this pool is 0, 2 in * the pool generation makes all the counters valid for allocation. + * The start and end query generations protect counters released in + * the query and update gap period from being reallocated before the + * last query has finished and the stats are updated to the memory. */ - rte_atomic64_set(&pool->query_gen, 0x2); + rte_atomic64_set(&pool->start_query_gen, 0x2); + rte_atomic64_set(&pool->end_query_gen, 0x2); TAILQ_INIT(&pool->counters); TAILQ_INSERT_HEAD(&cont->pool_list, pool, next); cont->pools[n_valid] = pool; @@ -4365,8 +4369,8 @@ struct field_modify_info modify_tcp[] = { * updated too. 
*/ cnt_free = TAILQ_FIRST(&pool->counters); - if (cnt_free && cnt_free->query_gen + 1 < - rte_atomic64_read(&pool->query_gen)) + if (cnt_free && cnt_free->query_gen < + rte_atomic64_read(&pool->end_query_gen)) break; cnt_free = NULL; } @@ -4441,7 +4445,13 @@ struct field_modify_info modify_tcp[] = { /* Put the counter in the end - the last updated one. */ TAILQ_INSERT_TAIL(&pool->counters, counter, next); - counter->query_gen = rte_atomic64_read(&pool->query_gen); + /* + * Counters released between query trigger and handler need + * to wait for the next round of query, since the packets + * arriving in the gap period are not accounted to the old + * counter. + */ + counter->query_gen = rte_atomic64_read(&pool->start_query_gen); } } -- 1.8.3.1