From: Dharmik Thakkar
To: Olivier Matz, Andrew Rybchenko
Cc: dev@dpdk.org, nd@arm.com, joyce.kong@arm.com, honnappa.nagarahalli@arm.com, Dharmik Thakkar
Date: Tue, 27 Apr 2021 11:01:40 -0500
Message-Id: <20210427160141.9945-3-dharmik.thakkar@arm.com>
In-Reply-To: <20210427160141.9945-1-dharmik.thakkar@arm.com>
References: <20210423012938.24770-1-dharmik.thakkar@arm.com>
 <20210427160141.9945-1-dharmik.thakkar@arm.com>
Subject: [dpdk-dev] [PATCH v5 2/2] mempool: distinguish debug counters from cache and pool
X-Mailer: git-send-email 2.17.1

From: Joyce Kong

If the cache is enabled, objects are first retrieved from / put into the
per-lcore cache, and only then from/to the common pool. Currently the
debug stats count objects retrieved from / put into the cache and the
common pool together; it is better to distinguish them.
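The cache-side counters can still be recovered from the new common-pool
counters. Below is a minimal sketch of the put-side arithmetic, assuming
RTE_LIBRTE_MEMPOOL_DEBUG is enabled and a struct rte_mempool_debug_stats
summed across lcores as done in rte_mempool_dump(); the helper is
illustrative only and not part of this patch:

    #include <rte_mempool.h>

    /* Illustrative only: objects put by the application that were
     * absorbed by the per-lcore cache rather than flushed to the
     * common pool, i.e. put_cache_objs = put_objs - put_common_pool_objs.
     */
    static inline uint64_t
    put_cache_objs(const struct rte_mempool_debug_stats *sum)
    {
            return sum->put_objs - sum->put_common_pool_objs;
    }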
Signed-off-by: Joyce Kong
Signed-off-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
---
 lib/mempool/rte_mempool.c | 16 ++++++++++++++++
 lib/mempool/rte_mempool.h | 29 ++++++++++++++++++++++-------
 2 files changed, 38 insertions(+), 7 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index afb1239c8d48..e9343c2a7f6b 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1244,6 +1244,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
 		sum.put_objs += mp->stats[lcore_id].put_objs;
+		sum.put_common_pool_bulk +=
+			mp->stats[lcore_id].put_common_pool_bulk;
+		sum.put_common_pool_objs +=
+			mp->stats[lcore_id].put_common_pool_objs;
+		sum.get_common_pool_bulk +=
+			mp->stats[lcore_id].get_common_pool_bulk;
+		sum.get_common_pool_objs +=
+			mp->stats[lcore_id].get_common_pool_objs;
 		sum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
@@ -1254,6 +1262,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
 	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
+	fprintf(f, "    put_common_pool_bulk=%"PRIu64"\n",
+		sum.put_common_pool_bulk);
+	fprintf(f, "    put_common_pool_objs=%"PRIu64"\n",
+		sum.put_common_pool_objs);
+	fprintf(f, "    get_common_pool_bulk=%"PRIu64"\n",
+		sum.get_common_pool_bulk);
+	fprintf(f, "    get_common_pool_objs=%"PRIu64"\n",
+		sum.get_common_pool_objs);
 	fprintf(f, "    get_success_bulk=%"PRIu64"\n", sum.get_success_bulk);
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 848a19226149..7089baab31e3 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -64,14 +64,21 @@ extern "C" {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 /**
  * A structure that stores the mempool statistics (per-lcore).
+ * Note: Cache stats (put_cache_bulk/objs, get_cache_bulk/objs) are not
+ * captured since they can be calculated from other stats.
+ * For example: put_cache_objs = put_objs - put_common_pool_objs.
  */
 struct rte_mempool_debug_stats {
-	uint64_t put_bulk;         /**< Number of puts. */
-	uint64_t put_objs;         /**< Number of objects successfully put. */
-	uint64_t get_success_bulk; /**< Successful allocation number. */
-	uint64_t get_success_objs; /**< Objects successfully allocated. */
-	uint64_t get_fail_bulk;    /**< Failed allocation number. */
-	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	uint64_t put_bulk;             /**< Number of puts. */
+	uint64_t put_objs;             /**< Number of objects successfully put. */
+	uint64_t put_common_pool_bulk; /**< Number of bulks enqueued in common pool. */
+	uint64_t put_common_pool_objs; /**< Number of objects enqueued in common pool. */
+	uint64_t get_common_pool_bulk; /**< Number of bulks dequeued from common pool. */
+	uint64_t get_common_pool_objs; /**< Number of objects dequeued from common pool. */
+	uint64_t get_success_bulk;     /**< Successful allocation number. */
+	uint64_t get_success_objs;     /**< Objects successfully allocated. */
+	uint64_t get_fail_bulk;        /**< Failed allocation number. */
+	uint64_t get_fail_objs;        /**< Objects that failed to be allocated. */
 	/** Successful allocation number of contiguous blocks. */
 	uint64_t get_success_blks;
 	/** Failed allocation number of contiguous blocks. */
@@ -699,10 +706,16 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 		void **obj_table, unsigned n)
 {
 	struct rte_mempool_ops *ops;
+	int ret;
 
 	rte_mempool_trace_ops_dequeue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
-	return ops->dequeue(mp, obj_table, n);
+	ret = ops->dequeue(mp, obj_table, n);
+	if (ret == 0) {
+		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
+		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
+	}
+	return ret;
 }
 
 /**
@@ -749,6 +762,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_ops *ops;
 
+	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
+	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
 	return ops->enqueue(mp, obj_table, n);
-- 
2.17.1