Date: Tue, 27 Apr 2021 14:18:43 +0200
From: Olivier Matz
To: Dharmik Thakkar
Cc: Andrew Rybchenko, dev@dpdk.org, nd@arm.com, joyce.kong@arm.com
Message-ID: <20210427121843.GK1726@platinum>
References: <20210420000800.1504-1-dharmik.thakkar@arm.com>
 <20210423012938.24770-1-dharmik.thakkar@arm.com>
 <20210423012938.24770-3-dharmik.thakkar@arm.com>
In-Reply-To: <20210423012938.24770-3-dharmik.thakkar@arm.com>
Subject: Re: [dpdk-dev] [PATCH v4 2/2] lib/mempool: distinguish debug counters
 from cache and pool

Hi Dharmik,

A few comments below.

On Thu, Apr 22, 2021 at 08:29:38PM -0500, Dharmik Thakkar wrote:
> From: Joyce Kong
>
> If cache is enabled, objects will be retrieved/put from/to cache,
> subsequently from/to the common pool. Now the debug stats calculate
> the objects retrieved/put from/to cache and pool together, it is
> better to distinguish them.
>
> Signed-off-by: Joyce Kong
> Signed-off-by: Dharmik Thakkar
> Reviewed-by: Ruifeng Wang
> Reviewed-by: Honnappa Nagarahalli
> ---
>  lib/mempool/rte_mempool.c | 16 +++++++++++++++
>  lib/mempool/rte_mempool.h | 43 ++++++++++++++++++++++++++-------------
>  2 files changed, 45 insertions(+), 14 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
> index afb1239c8d48..e9343c2a7f6b 100644
> --- a/lib/mempool/rte_mempool.c
> +++ b/lib/mempool/rte_mempool.c
> @@ -1244,6 +1244,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
>  		sum.put_bulk += mp->stats[lcore_id].put_bulk;
>  		sum.put_objs += mp->stats[lcore_id].put_objs;
> +		sum.put_common_pool_bulk +=
> +			mp->stats[lcore_id].put_common_pool_bulk;
> +		sum.put_common_pool_objs +=
> +			mp->stats[lcore_id].put_common_pool_objs;
> +		sum.get_common_pool_bulk +=
> +			mp->stats[lcore_id].get_common_pool_bulk;
> +		sum.get_common_pool_objs +=
> +			mp->stats[lcore_id].get_common_pool_objs;
>  		sum.get_success_bulk += mp->stats[lcore_id].get_success_bulk;
>  		sum.get_success_objs += mp->stats[lcore_id].get_success_objs;
>  		sum.get_fail_bulk += mp->stats[lcore_id].get_fail_bulk;
> @@ -1254,6 +1262,14 @@ rte_mempool_dump(FILE *f, struct rte_mempool *mp)
>  	fprintf(f, "  stats:\n");
>  	fprintf(f, "    put_bulk=%"PRIu64"\n", sum.put_bulk);
>  	fprintf(f, "    put_objs=%"PRIu64"\n", sum.put_objs);
> +	fprintf(f, "    put_common_pool_bulk=%"PRIu64"\n",
> +		sum.put_common_pool_bulk);
> +	fprintf(f, "    put_common_pool_objs=%"PRIu64"\n",
> +		sum.put_common_pool_objs);
> +	fprintf(f, "    get_common_pool_bulk=%"PRIu64"\n",
> +		sum.get_common_pool_bulk);
> +	fprintf(f, "    get_common_pool_objs=%"PRIu64"\n",
> +		sum.get_common_pool_objs);
>  	fprintf(f, "    get_success_bulk=%"PRIu64"\n", sum.get_success_bulk);
>  	fprintf(f, "    get_success_objs=%"PRIu64"\n", sum.get_success_objs);
>  	fprintf(f, "    get_fail_bulk=%"PRIu64"\n", sum.get_fail_bulk);
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 848a19226149..4343b287dc4e 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -64,14 +64,21 @@ extern "C" {
>  #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
>  /**
>   * A structure that stores the mempool statistics (per-lcore).
> + * Note: Cache stats (put_cache_bulk/objs, get_cache_bulk/objs) are not
> + * captured since they can be calculated from other stats.
> + * For example: put_cache_objs = put_objs - put_common_pool_objs.
>   */
>  struct rte_mempool_debug_stats {
> -	uint64_t put_bulk;         /**< Number of puts. */
> -	uint64_t put_objs;         /**< Number of objects successfully put. */
> -	uint64_t get_success_bulk; /**< Successful allocation number. */
> -	uint64_t get_success_objs; /**< Objects successfully allocated. */
> -	uint64_t get_fail_bulk;    /**< Failed allocation number. */
> -	uint64_t get_fail_objs;    /**< Objects that failed to be allocated. */
> +	uint64_t put_bulk;             /**< Number of puts. */
> +	uint64_t put_objs;             /**< Number of objects successfully put. */
> +	uint64_t put_common_pool_bulk; /**< Number of bulks enqueued in common pool. */
> +	uint64_t put_common_pool_objs; /**< Number of objects enqueued in common pool. */
> +	uint64_t get_common_pool_bulk; /**< Number of bulks dequeued from common pool. */
> +	uint64_t get_common_pool_objs; /**< Number of objects dequeued from common pool. */
> +	uint64_t get_success_bulk;     /**< Successful allocation number. */
> +	uint64_t get_success_objs;     /**< Objects successfully allocated. */
> +	uint64_t get_fail_bulk;        /**< Failed allocation number. */
> +	uint64_t get_fail_objs;        /**< Objects that failed to be allocated. */
>  	/** Successful allocation number of contiguous blocks. */
>  	uint64_t get_success_blks;
>  	/** Failed allocation number of contiguous blocks. */
> @@ -699,10 +706,18 @@ rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
>  		void **obj_table, unsigned n)
>  {
>  	struct rte_mempool_ops *ops;
> +	int ret;
>
>  	rte_mempool_trace_ops_dequeue_bulk(mp, obj_table, n);
>  	ops = rte_mempool_get_ops(mp->ops_index);
> -	return ops->dequeue(mp, obj_table, n);
> +	ret = ops->dequeue(mp, obj_table, n);
> +	if (ret == 0) {
> +		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
> +		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
> +		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> +		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> +	}
> +	return ret;
>  }

I think we should only have the common_pool stats here, for 2 reasons:

- more consistent with put()

- in case we are called by __mempool_generic_get() for a "backfill"
  operation, the number of successes will not be incremented by the
  correct value (the "req" variable is != n)
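
Something like this is what I have in mind (untested sketch; the
get_success_* stats would then stay in __mempool_generic_get(), which
knows the size of the user request):

	ret = ops->dequeue(mp, obj_table, n);
	if (ret == 0) {
		/* count only the dequeue from the common pool here;
		 * successes are accounted by the caller, with the size
		 * of the user request rather than the backfill size */
		__MEMPOOL_STAT_ADD(mp, get_common_pool_bulk, 1);
		__MEMPOOL_STAT_ADD(mp, get_common_pool_objs, n);
	}
	return ret;
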
>
>  /**
> @@ -749,6 +764,8 @@ rte_mempool_ops_enqueue_bulk(struct rte_mempool *mp, void * const *obj_table,
>  {
>  	struct rte_mempool_ops *ops;
>
> +	__MEMPOOL_STAT_ADD(mp, put_common_pool_bulk, 1);
> +	__MEMPOOL_STAT_ADD(mp, put_common_pool_objs, n);
>  	rte_mempool_trace_ops_enqueue_bulk(mp, obj_table, n);
>  	ops = rte_mempool_get_ops(mp->ops_index);
>  	return ops->enqueue(mp, obj_table, n);
> @@ -1297,9 +1314,10 @@ __mempool_generic_put(struct rte_mempool *mp, void * const *obj_table,
>
>  	/* Add elements back into the cache */
>  	rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
> -
>  	cache->len += n;
>
> +	__MEMPOOL_STAT_ADD(mp, put_cache_bulk, 1);
> +

This one was forgotten; there is a compilation error:

In file included from ../lib/mempool/rte_mempool_ops_default.c:7:
../lib/mempool/rte_mempool.h: In function ‘__mempool_generic_put’:
../lib/mempool/rte_mempool.h:1319:25: error: ‘struct rte_mempool_debug_stats’ has no member named ‘put_cache_bulk’; did you mean ‘put_bulk’?
   __MEMPOOL_STAT_ADD(mp, put_cache_bulk, 1);
                          ^~~~~~~~~~~~~~
../lib/mempool/rte_mempool.h:283:26: note: in definition of macro ‘__MEMPOOL_STAT_ADD’
    mp->stats[__lcore_id].name += n; \
                           ^~~~
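
I guess this line should simply be removed: as the new comment on the
structure says, the cache counters can be derived from the others,
e.g. in rte_mempool_dump() one could compute (untested, numbers below
made up for illustration):

	/* objects that stayed in the cache, not stored as a counter;
	 * e.g. put_objs = 100, put_common_pool_objs = 60
	 *  -> 40 objects were put back into the cache */
	uint64_t put_cache_objs = sum.put_objs - sum.put_common_pool_objs;
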
>  	if (cache->len >= cache->flushthresh) {
>  		rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
>  				cache->len - cache->size);
> @@ -1430,6 +1448,9 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>  		}
>
>  		cache->len += req;
> +	} else {
> +		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> +		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
>  	}
>
>  	/* Now fill in the response ... */
> @@ -1438,9 +1459,6 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>
>  	cache->len -= n;
>
> -	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> -	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
> -
>  	return 0;
>
>  ring_dequeue:
> @@ -1451,9 +1469,6 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
>  	if (ret < 0) {
>  		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
>  		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
> -	} else {
> -		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
> -		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
>  	}
>
>  	return ret;
> --
> 2.17.1
>
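
To be explicit about the consequence of my comment on
rte_mempool_ops_dequeue_bulk() above: I would keep the get_success
stats where they were before this patch, so that they are always
incremented with the user's n. Roughly (untested sketch):

	/* cache path, once the response is filled from the cache */
	cache->len -= n;
	__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
	__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
	return 0;

ring_dequeue:
	/* get the objects directly from the common pool */
	ret = rte_mempool_ops_dequeue_bulk(mp, obj_table, n);
	if (ret < 0) {
		__MEMPOOL_STAT_ADD(mp, get_fail_bulk, 1);
		__MEMPOOL_STAT_ADD(mp, get_fail_objs, n);
	} else {
		__MEMPOOL_STAT_ADD(mp, get_success_bulk, 1);
		__MEMPOOL_STAT_ADD(mp, get_success_objs, n);
	}
	return ret;

This way the backfill size ("req") only shows up in the
get_common_pool_* counters, and get_success_objs always reflects what
the application actually got.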