From: Suanming Mou <suanmingm@mellanox.com>
To: Matan Azrad <matan@mellanox.com>,
Shahaf Shuler <shahafs@mellanox.com>,
Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Cc: dev@dpdk.org, rasland@mellanox.com
Subject: [dpdk-dev] [PATCH 2/8] net/mlx5: optimize counter release query generation
Date: Tue, 7 Apr 2020 11:59:41 +0800 [thread overview]
Message-ID: <1586231987-338112-3-git-send-email-suanmingm@mellanox.com> (raw)
In-Reply-To: <1586231987-338112-1-git-send-email-suanmingm@mellanox.com>
Query generation was introduced to avoid reallocating a counter
before its statistics are fully updated. A counter released between
the query trigger and the query handler may miss the packets that
arrived in that gap period. In this case, the counter can only be
reallocated once the pool query_gen is greater than the counter
query_gen + 1, which indicates that a new round of query has
finished and the statistics are fully updated.
Splitting the pool query_gen into start_query_gen and end_query_gen
helps to better identify the counters released in the gap period,
and allows the counters released before the query trigger or after
the query handler to be reallocated more efficiently.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
---
drivers/net/mlx5/mlx5.h | 3 ++-
drivers/net/mlx5/mlx5_flow.c | 14 +++++++++++++-
drivers/net/mlx5/mlx5_flow_dv.c | 18 ++++++++++++++----
3 files changed, 29 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 34ab475..fe7e684 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -283,7 +283,8 @@ struct mlx5_flow_counter_pool {
rte_atomic64_t a64_dcs;
};
/* The devx object of the minimum counter ID. */
- rte_atomic64_t query_gen;
+ rte_atomic64_t start_query_gen; /* Query start round. */
+ rte_atomic64_t end_query_gen; /* Query end round. */
uint32_t n_counters: 16; /* Number of devx allocated counters. */
rte_spinlock_t sl; /* The pool lock. */
struct mlx5_counter_stats_raw *raw;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6438a14..d2e9cc4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5655,6 +5655,13 @@ struct mlx5_flow_counter *
dcs = (struct mlx5_devx_obj *)(uintptr_t)rte_atomic64_read
(&pool->a64_dcs);
offset = batch ? 0 : dcs->id % MLX5_COUNTERS_PER_POOL;
+ /*
+ * Identify the counters released between the query trigger and the
+ * query handler more efficiently. A counter released in this gap
+ * period should wait for a new round of query, as the newly arrived
+ * packets will not be taken into account.
+ */
+ rte_atomic64_add(&pool->start_query_gen, 1);
ret = mlx5_devx_cmd_flow_counter_query(dcs, 0, MLX5_COUNTERS_PER_POOL -
offset, NULL, NULL,
pool->raw_hw->mem_mng->dm->id,
@@ -5663,6 +5670,7 @@ struct mlx5_flow_counter *
sh->devx_comp,
(uint64_t)(uintptr_t)pool);
if (ret) {
+ rte_atomic64_sub(&pool->start_query_gen, 1);
DRV_LOG(ERR, "Failed to trigger asynchronous query for dcs ID"
" %d", pool->min_dcs->id);
pool->raw_hw = NULL;
@@ -5702,13 +5710,17 @@ struct mlx5_flow_counter *
struct mlx5_counter_stats_raw *raw_to_free;
if (unlikely(status)) {
+ rte_atomic64_sub(&pool->start_query_gen, 1);
raw_to_free = pool->raw_hw;
} else {
raw_to_free = pool->raw;
rte_spinlock_lock(&pool->sl);
pool->raw = pool->raw_hw;
rte_spinlock_unlock(&pool->sl);
- rte_atomic64_add(&pool->query_gen, 1);
+ MLX5_ASSERT(rte_atomic64_read(&pool->end_query_gen) + 1 ==
+ rte_atomic64_read(&pool->start_query_gen));
+ rte_atomic64_set(&pool->end_query_gen,
+ rte_atomic64_read(&pool->start_query_gen));
/* Be sure the new raw counters data is updated in memory. */
rte_cio_wmb();
}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f022751..074f05f 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4190,8 +4190,12 @@ struct field_modify_info modify_tcp[] = {
/*
* The generation of the new allocated counters in this pool is 0, 2 in
* the pool generation makes all the counters valid for allocation.
+ * The start and end query generations ensure that a counter
+ * released in the trigger/handler gap is not reallocated before
+ * the last query finishes and the stats are updated to memory.
*/
- rte_atomic64_set(&pool->query_gen, 0x2);
+ rte_atomic64_set(&pool->start_query_gen, 0x2);
+ rte_atomic64_set(&pool->end_query_gen, 0x2);
TAILQ_INIT(&pool->counters);
TAILQ_INSERT_HEAD(&cont->pool_list, pool, next);
cont->pools[n_valid] = pool;
@@ -4365,8 +4369,8 @@ struct field_modify_info modify_tcp[] = {
* updated too.
*/
cnt_free = TAILQ_FIRST(&pool->counters);
- if (cnt_free && cnt_free->query_gen + 1 <
- rte_atomic64_read(&pool->query_gen))
+ if (cnt_free && cnt_free->query_gen <
+ rte_atomic64_read(&pool->end_query_gen))
break;
cnt_free = NULL;
}
@@ -4441,7 +4445,13 @@ struct field_modify_info modify_tcp[] = {
/* Put the counter in the end - the last updated one. */
TAILQ_INSERT_TAIL(&pool->counters, counter, next);
- counter->query_gen = rte_atomic64_read(&pool->query_gen);
+ /*
+ * Counters released between the query trigger and the query
+ * handler need to wait for the next round of query, since the
+ * packets arriving in the gap period are not accounted to the
+ * old counter statistics.
+ */
+ counter->query_gen = rte_atomic64_read(&pool->start_query_gen);
}
}
--
1.8.3.1
Thread overview:
2020-04-07 3:59 [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 1/8] net/mlx5: fix incorrect counter container usage Suanming Mou
2020-04-07 3:59 ` Suanming Mou [this message]
2020-04-07 3:59 ` [dpdk-dev] [PATCH 3/8] net/mlx5: change verbs counter allocator to indexed Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 4/8] common/mlx5: add batch counter id offset Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 5/8] net/mlx5: change Direct Verbs counter to indexed Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 6/8] net/mlx5: optimize flow counter handle type Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 7/8] net/mlx5: split the counter struct Suanming Mou
2020-04-07 3:59 ` [dpdk-dev] [PATCH 8/8] net/mlx5: reorganize fallback counter management Suanming Mou
2020-04-08 12:52 ` [dpdk-dev] [PATCH 0/8] net/mlx5 counter optimize Raslan Darawsheh