Date: Tue, 22 Dec 2020 11:16:22 +0900
From: Hideyuki Yamashita
To: David Hunt
Cc: dev@dpdk.org
Subject: Re: [dpdk-dev] [PATCH 0/5] add apistats function
Message-id: <20201222111622.CF9A.17218CA3@ntt-tx.co.jp>
In-reply-to: <2a9a170b-c351-e3f2-36b5-6db9d4c29da5@intel.com>
References: <20201204075109.14694-1-yamashita.hideyuki@ntt-tx.co.jp>
 <2a9a170b-c351-e3f2-36b5-6db9d4c29da5@intel.com>

Hi, David

Thanks for your comments.
Please see my comments inline tagged with [HY].

> On 4/12/2020 7:51 AM, Hideyuki Yamashita wrote:
> > In general, a DPDK application consumes CPU cycles because it polls
> > for incoming packets with the rx_burst API in an infinite loop.
> > This makes it difficult to estimate how much CPU time is really
> > spent sending/receiving packets by the DPDK application.
> >
> > For example, even if no incoming packets are arriving, CPU usage
> > looks nearly 100% when observed with the top command.
> >
> > It is beneficial if developers can observe the real CPU usage of the
> > DPDK application.
> > Such information can be exported to a monitoring application such as
> > Prometheus/Grafana, which can then show CPU usage graphically.
> >
> > To achieve the above, this patch set provides the apistats functionality.
> > apistats provides the following two counters for each lcore:
> > - rx_burst_counts[RTE_MAX_LCORE]
> > - tx_burst_counts[RTE_MAX_LCORE]
> > These accumulate the rx_burst/tx_burst call counts since the application
> > started.
> >
> > By using those values, developers can roughly estimate CPU usage.
> > Let us assume a DPDK application that simply forwards packets.
> > It calls tx_burst only if it receives packets.
> > If rx_burst_counts=1000 and tx_burst_counts=1000 during a certain
> > period of time, one can assume CPU usage is 100%.
> > If rx_burst_counts=1000 and tx_burst_counts=100 during a certain
> > period of time, one can assume CPU usage is 10%.
> > Here we assume that tx_burst_counts equals the number of rx_burst calls
> > that actually received incoming packets.
> >
> > This patch set provides the following:
> > - basic API counting functionality (apistats) in librte_ethdev
> > - code in testpmd to accumulate counter information
> > - code in proc-info to retrieve the above-mentioned counter information
> > - a description of the --apistats parameter in the proc-info document
> > - MAINTAINERS file updates for apistats.c and apistats.h
> >
> > Hideyuki Yamashita (5):
> >   maintainers: update maintainers file for apistats
> >   app/proc-info: add to use apistats
> >   app/test-pmd: add to use apistats
> >   docs: add description of apistats parameter into proc-info
> >   librte_ethdev: add to use apistats
> >
> >  MAINTAINERS                      |  3 ++
> >  app/proc-info/main.c             | 46 +++++++++++++++++++++++
> >  app/test-pmd/testpmd.c           |  4 ++
> >  doc/guides/tools/proc_info.rst   | 10 ++++-
> >  lib/librte_ethdev/meson.build    |  6 ++-
> >  lib/librte_ethdev/rte_apistats.c | 64 ++++++++++++++++++++++++++++++++
> >  lib/librte_ethdev/rte_apistats.h | 64 ++++++++++++++++++++++++++++++++
> >  lib/librte_ethdev/rte_ethdev.h   |  7 ++++
> >  lib/librte_ethdev/version.map    |  5 +++
> >  9 files changed, 205 insertions(+), 4 deletions(-)
> >  create mode 100644 lib/librte_ethdev/rte_apistats.c
> >  create mode 100644 lib/librte_ethdev/rte_apistats.h
>
> Hi Hideyuki,
>
> I have a few questions on the patch set.
>
> How does this compare to the mechanism added to l3fwd-power, which counts
> the number of empty, partial and full polls and uses them to calculate
> busyness? We saw pretty good tracking of busyness using those metrics.
> I would be concerned that just looking at the number of rx_bursts and
> tx_bursts may be limited to only a few use-cases. The l3fwd-power example
> uses branchless increments to capture empty, non-empty, full, and
> non-full polls.

[HY] Thanks for your comments.
You are correct. As you well know, l3fwd-power measures "how busy the CPU
cores are", and in that sense the goal of my proposal is the same as yours.
Moreover, l3fwd-power is more precise than my proposal.
The points of my proposal are:
- it is easier to use
- it has less code impact on the application

I think that if an application developer wants to measure "how busy the CPU
cores are", he/she currently needs to implement either
- logic similar to l3fwd-power, or
- the jobstats API,
and that is rather heavy for existing applications.
With my proposal it is much easier: an external tool only needs to sample
the two counters periodically, roughly as in the sketch below.
(It is a rough measurement, though; I see it as a trade-off.)
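To make that concrete, here is a rough, untested sketch of the sampling
side. The struct, field, and function names only mirror the counters
described in the cover letter (rx_burst_counts/tx_burst_counts per lcore);
they are placeholders of mine, not the actual contents of the patch set:

/*
 * Rough sketch only: how a monitoring tool could turn the two per-lcore
 * counters into a busyness estimate.
 */
#include <stdint.h>

#define MAX_LCORE 128	/* stands in for RTE_MAX_LCORE */

struct apistats_sample {
	uint64_t rx_burst_counts[MAX_LCORE];
	uint64_t tx_burst_counts[MAX_LCORE];
};

/*
 * Estimate busyness (in percent) of one lcore from two samples taken some
 * interval apart.  For a simple forwarding application, tx_burst is called
 * only when rx_burst actually returned packets, so tx/rx approximates the
 * share of non-empty polls: 1000/1000 -> ~100%, 100/1000 -> ~10%.
 */
double
estimate_busyness(const struct apistats_sample *prev,
		  const struct apistats_sample *cur,
		  unsigned int lcore)
{
	uint64_t rx = cur->rx_burst_counts[lcore] - prev->rx_burst_counts[lcore];
	uint64_t tx = cur->tx_burst_counts[lcore] - prev->tx_burst_counts[lcore];

	if (rx == 0)
		return 0.0;
	return 100.0 * (double)tx / (double)rx;
}

Of course this only holds under the forwarding assumption above; as you
say, counting empty/non-empty/full polls as in l3fwd-power is more precise.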
> Why not use the existing telemetry library to store the stats?
> It would be good if whatever metrics were counted were made available in
> a standard way, so that external entities such as collectd could pick
> them up, rather than having to parse the new struct. The l3fwd-power
> example registers the necessary new metrics and exposes them through the
> telemetry library.

[HY] OK. Currently there is no reason not to use telemetry.
I think telemetry is useful for applications which do NOT call the DPDK C
API directly. My patch set provides only a C API to retrieve apistats.
But assuming not all applications call the C API, I think it is reasonable
to expose the stats through telemetry in addition to the C API (a rough
sketch of what I have in mind is at the end of this mail).

> And a comment on the patch set in general: the order of the patch set
> seems reversed. The earlier patches do not compile, because they depend
> on rte_apistats.h, which is introduced in the final patch. Each patch,
> as it is applied, needs to build successfully.

[HY] Thanks for your information.
OK. Now I understand your point that the order of the patches is very
important. Thanks.

> Also, I would suggest a different name. rte_apistats seems very generic
> and could apply to anything. How about something like rte_ethdev_stats.h
> or rte_burst_stats.h?

[HY] Thanks. I agree. "txrx_apicall_stats" maybe?

Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Rgds,
> Dave.
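P.S. Regarding the telemetry point, below is a rough, untested sketch of
how one of the counters could be exposed through the telemetry library as
it exists in DPDK 20.11. The command name "/apistats/rx_burst_counts", the
handler, and the extern counter array are placeholders of my own, not the
actual patch contents:

#include <stdio.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_telemetry.h>

/* Placeholder for however the patch set ends up storing the counters. */
extern uint64_t apistats_rx_burst_counts[RTE_MAX_LCORE];

/* Telemetry handler: return one dict entry per enabled lcore. */
static int
handle_rx_burst_counts(const char *cmd __rte_unused,
		       const char *params __rte_unused,
		       struct rte_tel_data *d)
{
	unsigned int lcore_id;
	char name[32];

	rte_tel_data_start_dict(d);
	RTE_LCORE_FOREACH(lcore_id) {
		snprintf(name, sizeof(name), "lcore_%u", lcore_id);
		rte_tel_data_add_dict_u64(d, name,
					  apistats_rx_burst_counts[lcore_id]);
	}
	return 0;
}

RTE_INIT(apistats_telemetry_init)
{
	rte_telemetry_register_cmd("/apistats/rx_burst_counts",
				   handle_rx_burst_counts,
				   "Returns rx_burst call counts per lcore. No parameters.");
}

Something equivalent for tx_burst_counts, plus the C API, would then let
collectd-style collectors and DPDK-aware applications read the same data.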