From mboxrd@z Thu Jan 1 00:00:00 1970
From: Dharmik Thakkar
To: Olivier Matz, Andrew Rybchenko
Cc: dev@dpdk.org, nd@arm.com, joyce.kong@arm.com, Dharmik Thakkar
Date: Thu, 22 Apr 2021 20:29:38 -0500
Message-Id: <20210423012938.24770-3-dharmik.thakkar@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210423012938.24770-1-dharmik.thakkar@arm.com>
References: <20210420000800.1504-1-dharmik.thakkar@arm.com>
 <20210423012938.24770-1-dharmik.thakkar@arm.com>
Subject: [dpdk-dev] [PATCH v4 2/2] lib/mempool: distinguish debug counters
 from cache and pool
List-Id: DPDK patches and discussions

From: Joyce Kong

When a cache is enabled, objects are retrieved from (and put to) the
cache first, and only then from (and to) the common pool. The debug
stats currently count objects retrieved from and put to the cache and
the common pool together; it is better to distinguish them.
Signed-off-by: Joyce Kong
Signed-off-by: Dharmik Thakkar
Reviewed-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
---
 lib/mempool/rte_mempool.c | 16 +++++++++++++++
 lib/mempool/rte_mempool.h | 43 ++++++++++++++++++++++++++-------------
 2 files changed, 45 insertions(+), 14 deletions(-)

diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index afb1239c8d48..e9343c2a7f6b 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1244,6 +1244,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
 		sum.put_bulk += mp->stats[lcore_id].put_bulk;
 		sum.put_objs += mp->stats[lcore_id].put_objs;
+		sum.put_common_pool_bulk +=
+			mp->stats[lcore_id].put_common_pool_bulk;
+		sum.put_common_pool_objs +=
+			mp->stats[lcore_id].put_common_pool_objs;
+		sum.get_common_pool_bulk +=
+			mp->stats[lcore_id].get_common_pool_bulk;
+		sum.get_common_pool_objs +=
+			mp->stats[lcore_id].get_common_pool_objs;
 		sum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;
 		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
 		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
@@ -1254,6 +1262,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
 	fprintf(f, "  stats:\n");
 	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
 	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
+	fprintf(f, "    put_common_pool_bulk=%"PRIu64"\n",
+		sum.put_common_pool_bulk);
+	fprintf(f, "    put_common_pool_objs=%"PRIu64"\n",
+		sum.put_common_pool_objs);
+	fprintf(f, "    get_common_pool_bulk=%"PRIu64"\n",
+		sum.get_common_pool_bulk);
+	fprintf(f, "    get_common_pool_objs=%"PRIu64"\n",
+		sum.get_common_pool_objs);
 	fprintf(f, "    get_success_bulk=%"PRIu64"\n", sum.get_success_bulk);
 	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
 	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 848a19226149..4343b287dc4e 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -64,14 +64,21 @@ extern "C" {
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 /**
  * A structure that stores the mempool statistics (per-lcore).
+ * Note: Cache stats (put_cache_bulk/objs, get_cache_bulk/objs) are not
+ * captured since they can be calculated from other stats.
+ * For example: put_cache_objs = put_objs - put_common_pool_objs.
  */
 struct rte_mempool_debug_stats {
-	uint64_t put_bulk;         /**< Number of puts. */
-	uint64_t put_objs;         /**< Number of objects successfully put. */
-	uint64_t get_success_bulk; /**< Successful allocation number. */
-	uint64_t get_success_objs; /**< Objects successfully allocated. */
-	uint64_t get_fail_bulk;    /**< Failed allocation number. */
-	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
+	uint64_t put_bulk;		/**< Number of puts. */
+	uint64_t put_objs;		/**< Number of objects successfully put. */
+	uint64_t put_common_pool_bulk;	/**< Number of bulks enqueued in common pool. */
+	uint64_t put_common_pool_objs;	/**< Number of objects enqueued in common pool. */
+	uint64_t get_common_pool_bulk;	/**< Number of bulks dequeued from common pool. */
+	uint64_t get_common_pool_objs;	/**< Number of objects dequeued from common pool. */
+	uint64_t get_success_bulk;	/**< Successful allocation number. */
+	uint64_t get_success_objs;	/**< Objects successfully allocated. */
+	uint64_t get_fail_bulk;		/**< Failed allocation number. */
+	uint64_t get_fail_objs;		/**< Objects that failed to be allocated. */
 	/** Successful allocation number of contiguous blocks. */
 	uint64_t get_success_blks;
 	/** Failed allocation number of contiguous blocks. */
@@ -699,10 +706,18 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
 		void **obj_table, unsigned n)
 {
 	struct rte_mempool_ops *ops;
+	int ret;
 
 	rte_mempool_trace_ops_dequeue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
-	return ops->dequeue(mp, obj_table, n);
+	ret = ops->dequeue(mp, obj_table, n);
+	if (ret == 0) {
+		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
+		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
+		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
+	}
+	return ret;
 }
 
 /**
@@ -749,6 +764,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
 {
 	struct rte_mempool_ops *ops;
 
+	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
+	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
 	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
 	ops = rte_mempool_get_ops(mp->ops_index);
 	return ops->enqueue(mp, obj_table, n);
@@ -1297,9 +1314,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
 
 	/* Add elements back into the cache */
 	rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
-
 	cache->len += n;
 
+	__MEMPOOL_STAT_ADD(mp, put_cache_bulk, 1);
+
 	if (cache->len >= cache->flushthresh) {
 		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
 				cache->len - cache->size);
@@ -1430,6 +1448,9 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 		}
 
 		cache->len += req;
+	} else {
+		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
+		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 	}
 
 	/* Now fill in the response ... */
@@ -1438,9 +1459,6 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 
 	cache->len -= n;
 
-	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
-
 	return 0;
 
 ring_dequeue:
@@ -1451,9 +1469,6 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 	if (ret < 0) {
 		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
 		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
-	} else {
-		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
-		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
 	}
 
 	return ret;
-- 
2.17.1