Date: Tue, 01 Dec 2020 09:40:37 +0900
From: Hideyuki Yamashita
To: Baruch Even
Cc: Morten Brorup, dpdk-dev
Subject: Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
Message-id: <20201201094035.EC0A.17218CA3@ntt-tx.co.jp_1>
References: <20201126102049.C6F8.17218CA3@ntt-tx.co.jp_1>
List-Id: DPDK patches and discussions

Hello Baruch,

Thanks for your feedback on our roadmap, and for sharing your thoughts.

As you pointed out, I agree that there are different ways to
measure/estimate CPU usage; my proposal is a rough one.
I understand that there is some interest in this enhancement
(measuring CPU usage), which I think is good.
In any case, I will start by preparing the patch itself, and then
hear what others think about the idea/proposal.

Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi,
>
> The way we do this accounting is completely different: it depends on
> having logic that says you are in the idle state, and it counts the
> start and stop time from entering to exiting the idle function. It
> then subtracts the idle time from the stat period time, and that
> gives you the time (also as a percentage) that you spend idle
> polling. The idle loop, in its most basic form, only does the polling
> and excludes time when packets were received (or when your app does
> other things on the same core that are not connected to DPDK).
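
If I understand correctly, the accounting looks roughly like the
following sketch. This is only a minimal illustration of the idea with
made-up names (idle_stats, poll_loop, process_packets), not your actual
implementation:

    /* Minimal sketch of idle-time accounting around an rx poll loop.
     * Assumes a TSC-based timer (rte_rdtsc()) and a hypothetical
     * per-lcore stats struct; illustrative names only. */
    #include <rte_cycles.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    struct idle_stats {
        uint64_t idle_cycles;   /* cycles spent in empty polls */
        uint64_t total_cycles;  /* cycles in the whole stat period */
    };

    static struct idle_stats stats[RTE_MAX_LCORE];

    static void
    poll_loop(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[32];
        unsigned int lcore = rte_lcore_id();
        uint64_t period_start = rte_rdtsc();

        for (;;) {
            uint64_t t0 = rte_rdtsc();
            uint16_t nb = rte_eth_rx_burst(port_id, queue_id, pkts, 32);

            if (nb == 0) {
                /* Empty poll: everything between t0 and now is idle. */
                stats[lcore].idle_cycles += rte_rdtsc() - t0;
            } else {
                /* process_packets(pkts, nb);
                 * busy time, deliberately not counted as idle */
            }
            stats[lcore].total_cycles = rte_rdtsc() - period_start;
        }
    }

    /* Busy percentage = 100 * (total_cycles - idle_cycles) / total_cycles */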
> Using the counters for the number of polls is going to be harder to
> use and far less effective.
>
> Baruch
>
> On Thu, Nov 26, 2020 at 3:21 AM Hideyuki Yamashita <
> yamashita.hideyuki@ntt-tx.co.jp> wrote:
>
> > Hello Morten,
> >
> > Thanks for giving me your valuable feedback.
> > Please see inline, tagged with [Hideyuki].
> >
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hideyuki Yamashita
> > > Sent: Wednesday, November 25, 2020 6:40 AM
> > > >
> > > > Hello,
> > > >
> > > > Following are the work items planned for 21.02 from NTT TechnoCross.
> > > > I will try to post the patch set after 20.11 is released.
> > > >
> > > > ---
> > > > 1) Introduce API stats function
> > > > In general, a DPDK application consumes CPU because it polls for
> > > > incoming packets with the rx_burst API in an infinite loop.
> > > > This makes it difficult to estimate how much CPU is really used
> > > > by the DPDK application to send/receive packets.
> > > >
> > > > For example, even if no incoming packets are arriving, CPU usage
> > > > looks nearly 100% when observed with the top command.
> > > >
> > > > It is beneficial if developers can observe the real CPU usage of
> > > > the DPDK application.
> > > > Such information can be exported to a monitoring application like
> > > > Prometheus/Grafana and the CPU usage shown graphically.
> > >
> > > This would be very beneficial.
> > >
> > > Unfortunately, this seems to be not so simple for applications like
> > > the SmartShare StraightShaper, which is not a simple packet
> > > forwarding application but has multiple pipeline stages. Our
> > > application also keeps some packets in queues for shaping purposes,
> > > so the number of packets transmitted does not match the number of
> > > packets received within some time interval.
> >
> > [Hideyuki]
> > Thanks. I share the same view.
> > DPDK applications vary, and not all of them simply forward incoming
> > packets, so the target applications may be limited.
> > Still, I believe this enhancement is useful for those applications.
> >
> > > > To achieve the above, this patch set provides apistats
> > > > functionality. apistats provides the following two counters for
> > > > each lcore:
> > > > - rx_burst_counts[RTE_MAX_LCORE]
> > > > - tx_burst_counts[RTE_MAX_LCORE]
> > > > These accumulate rx_burst/tx_burst counts since the application
> > > > starts.
> > > >
> > > > By using those values, developers can roughly estimate CPU usage.
> > > > Let us assume a DPDK application that simply forwards packets.
> > > > It calls tx_burst only if it receives packets.
> > > > If rx_burst_counts=1000 and tx_burst_counts=1000 during a certain
> > > > period of time, one can assume CPU usage is 100%.
> > > > If rx_burst_counts=1000 and tx_burst_counts=100 during a certain
> > > > period of time, one can assume CPU usage is 10%.
> > > > Here we assume that tx_burst_count equals the number of times the
> > > > rx_burst function really receives incoming packets.
> > >
> > > I am not sure I understand what is being counted in these counters:
> > > the number of packets in the bursts, or the number of invocations
> > > of the rx_burst/tx_burst functions?
> >
> > [Hideyuki]
> > The latter.
> > I think the existing mechanism may store the number of packets.
> > (Maybe I am wrong.)
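> >
> > For clarity, the counting I have in mind looks roughly like this
> > sketch. The wrapper names and struct layout here are only
> > illustrative, not the actual patch:

    /* Per-lcore counters of rx_burst/tx_burst invocations, in the
     * spirit of the apistats proposal; illustrative names only. */
    #include <stdint.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    struct apistats {
        uint64_t rx_burst_counts[RTE_MAX_LCORE];
        uint64_t tx_burst_counts[RTE_MAX_LCORE];
    };

    static struct apistats stats;

    static uint16_t
    counted_rx_burst(uint16_t port, uint16_t queue,
                     struct rte_mbuf **pkts, uint16_t n)
    {
        /* Count the invocation, not the number of packets. */
        stats.rx_burst_counts[rte_lcore_id()]++;
        return rte_eth_rx_burst(port, queue, pkts, n);
    }

    static uint16_t
    counted_tx_burst(uint16_t port, uint16_t queue,
                     struct rte_mbuf **pkts, uint16_t n)
    {
        stats.tx_burst_counts[rte_lcore_id()]++;
        return rte_eth_tx_burst(port, queue, pkts, n);
    }

    /* Rough CPU usage for a simple forwarder, per the estimate above:
     * usage_pct = 100 * tx_burst_counts / rx_burst_counts
     * e.g. 1000 rx polls and 100 tx bursts in a period -> ~10%. */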
> > > Here are some data from our purpose-built profiler, illustrating
> > > how nonlinear this really is. These data are from a SmartShare
> > > appliance in live production at an ISP. I hope you find them useful:
> > >
> > > Rx_burst uses ca. 40 CPU cycles if there are no packets, ca. 260
> > > cycles if there is one packet, and down to ca. 40 cycles per packet
> > > for a burst of many packets.
> > >
> > > Tx_burst uses ca. 350 cycles for one packet, and down to ca. 20
> > > cycles per packet for a burst of many packets.
> >
> > [Hideyuki]
> > Thanks for sharing this useful information!
> > Ah, I realize now that the consumption of CPU cycles is not linear,
> > as in:
> >
> > 0 packets received  -> 0 cycles
> > 1 packet received   -> 1 cycle
> > 10 packets received -> 10 cycles
> >
> > It is very interesting. I will keep this information in mind.
> >
> > > One of our intermediate pipeline stages (which is not receiving or
> > > transmitting packets, only processing them) uses ca. 150 cycles for
> > > a burst of one packet, and down to ca. 110 cycles for a burst of
> > > many packets.
> > >
> > > Nevertheless, your suggested API might be usable by simple
> > > ingress->routing->egress applications. So don't let me discourage
> > > you!
> >
> > [Hideyuki]
> > Thanks for supporting my idea.
> > Yes, I agree that for simple forwarding applications this enhancement
> > might be useful for monitoring CPU usage "roughly".
> >
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
>
> --
> Baruch Even
> Platform Team Manager at WekaIO
> +972-54-2577223 | baruch@weka.io | https://www.weka.io/
>
> The World's Fastest File System
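
To make the nonlinearity concrete, a rough back-of-the-envelope
calculation using the approximate figures quoted above (treating them
as exact for illustration only):

    1000 polls, all empty:        1000 * 40 cycles  =  40,000 cycles
    1000 polls, one packet each:  1000 * 260 cycles = 260,000 cycles

A core whose every poll returns one packet burns roughly 6.5x the
rx_burst cycles of a core whose polls are all empty (260 vs. 40 per
poll), and an "idle" core still burns 40 cycles per poll. So mapping
the ratio of productive polls directly to a CPU-usage percentage only
holds very approximately.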