DPDK patches and discussions
From: Baruch Even <baruch@weka.io>
To: Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp>
Cc: Morten Brorup <mb@smartsharesystems.com>, dpdk-dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] NTT TechnoCross roadmap for 21.02
Date: Sun, 29 Nov 2020 16:43:20 +0200	[thread overview]
Message-ID: <CAKye4QaOqFW0RQ2OhT1PyVxUVw0V9eMQNHVHB9UjWKSfgHOAiA@mail.gmail.com> (raw)
In-Reply-To: <20201126102049.C6F8.17218CA3@ntt-tx.co.jp_1>

Hi,

The way we do this accounting is completely different: it depends on having
logic that knows when you are in the idle state and records the time on
entering and exiting the idle function. It then subtracts the accumulated
idle time from the stat period, which gives you the time (and percentage)
that you spend idle polling. In its most basic form the idle loop does only
the polling, and it excludes the time when packets were received (or when
your app does other things on the same core that are not connected to DPDK).
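
A minimal sketch of what I mean, in plain C against the public DPDK API
(names such as poll_loop, process_burst and idle_cycles are placeholders
for illustration, not our real code):

    #include <stdbool.h>
    #include <stdio.h>
    #include <rte_cycles.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    static volatile bool running = true;

    /* Hypothetical application work on a received burst. */
    static void process_burst(struct rte_mbuf **pkts, uint16_t nb)
    {
            rte_pktmbuf_free_bulk(pkts, nb);
    }

    static void poll_loop(uint16_t port, uint16_t queue)
    {
            struct rte_mbuf *pkts[BURST_SIZE];
            uint64_t idle_cycles = 0;
            uint64_t period_start = rte_rdtsc();

            while (running) {
                    uint64_t t0 = rte_rdtsc();
                    uint16_t nb = rte_eth_rx_burst(port, queue,
                                                   pkts, BURST_SIZE);
                    if (nb == 0) {
                            /* Empty poll: the whole iteration is idle. */
                            idle_cycles += rte_rdtsc() - t0;
                            continue;
                    }
                    /* Packets arrived: processing time is NOT idle. */
                    process_burst(pkts, nb);
            }

            /* Real busy time = stat period minus accumulated idle time. */
            uint64_t period = rte_rdtsc() - period_start;
            printf("busy: %.1f%%\n",
                   100.0 * (double)(period - idle_cycles) / (double)period);
    }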

Counters of the number of polls are going to be harder to use for this
purpose and far less effective.
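
For clarity, this is how I read the proposed counter scheme (a sketch of my
understanding, not the actual patch; the counted_* wrapper names are made
up): every rx_burst/tx_burst invocation bumps a per-lcore counter,
regardless of how many packets the call actually moved.

    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    /* Per-lcore call counters, as described in the proposal. */
    static uint64_t rx_burst_counts[RTE_MAX_LCORE];
    static uint64_t tx_burst_counts[RTE_MAX_LCORE];

    static inline uint16_t
    counted_rx_burst(uint16_t port, uint16_t queue,
                     struct rte_mbuf **pkts, uint16_t n)
    {
            rx_burst_counts[rte_lcore_id()]++;
            return rte_eth_rx_burst(port, queue, pkts, n);
    }

    static inline uint16_t
    counted_tx_burst(uint16_t port, uint16_t queue,
                     struct rte_mbuf **pkts, uint16_t n)
    {
            tx_burst_counts[rte_lcore_id()]++;
            return rte_eth_tx_burst(port, queue, pkts, n);
    }

The ratio of the two counters only tells you how often the forwarding path
fired relative to the polling rate; it says nothing about how many cycles
each call actually cost, which is where the nonlinearity discussed below
comes in.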

Baruch



On Thu, Nov 26, 2020 at 3:21 AM Hideyuki Yamashita <yamashita.hideyuki@ntt-tx.co.jp> wrote:

> Hello Morten,
>
> Thank you for giving me your valuable feedback.
> Please see inline tagged with [Hideyuki].
>
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hideyuki
> Yamashita
> > > Sent: Wednesday, November 25, 2020 6:40 AM
> > >
> > > Hello,
> > >
> > > Following are the work items planned for 21.02 from NTT TechnoCross:
> > > I will try to post patch set after 20.11 is released.
> > >
> > > ---
> > > 1) Introduce API stats function
> > > In general, a DPDK application consumes CPU because it polls for
> > > incoming packets with the rx_burst API in an infinite loop.
> > > This makes it difficult to estimate how much CPU is really
> > > used to send/receive packets by the DPDK application.
> > >
> > > For example, even if no incoming packets are arriving, CPU usage
> > > looks nearly 100% when observed with the top command.
> > >
> > > It would be beneficial if developers could observe the real CPU usage
> > > of the DPDK application.
> > > Such information can be exported to monitoring applications like
> > > Prometheus/Grafana and shown as a CPU usage graph.
> >
> > This would be very beneficial.
> >
> > Unfortunately, this seems to be not so simple for applications like the
> SmartShare StraightShaper, which is not a simple packet forwarding
> application, but has multiple pipeline stages. Our application also keeps
> some packets in queues for shaping purposes, so the number of packets
> transmitted does not match the number of packets received within some time
> interval.
>
> [Hideyuki]
> Thanks.
> I share the same view.
> DPDK applications vary, and not all of them simply forward
> incoming packets, so the target applications may be limited.
> Still, I believe this enhancement is useful for those applications.
>
> > >
> > > To achieve the above, this patch set provides apistats functionality.
> > > apistats provides the following two counters for each lcore:
> > > - rx_burst_counts[RTE_MAX_LCORE]
> > > - tx_burst_counts[RTE_MAX_LCORE]
> > > These accumulate the rx_burst/tx_burst call counts since the
> > > application starts.
> > >
> > > By using those values, developers can roughly estimate CPU usage.
> > > Let us assume a DPDK application that simply forwards packets.
> > > It calls tx_burst only if it receives packets.
> > > If rx_burst_counts=1000 and tx_burst_count=1000 during a certain
> > > period of time, one can assume CPU usage is 100%.
> > > If rx_burst_counts=1000 and tx_burst_count=100 during a certain
> > > period of time, one can assume CPU usage is 10%.
> > > Here we assume that tx_burst_count equals the number of rx_burst
> > > calls that actually received incoming packets.
> >
> > I am not sure I understand what is being counted in these counters. The
> number of packets in the bursts, or the number of invocations of the
> rx_burst/tx_burst functions.
> [Hideyuki]
> The latter.
> I think the existing mechanism may store the number of packets
> (but maybe I am wrong).
>
> >
> > Here are some data from our purpose-built profiler, illustrating how
> nonlinear this really is. These data are from a SmartShare appliance in
> live production at an ISP. I hope you find it useful:
> >
> > Rx_burst uses ca. 40 CPU cycles if there are no packets, ca. 260 cycles
> if there is one packet, and down to ca. 40 cycles per packet for a burst of
> many packets.
> >
> > Tx_burst uses ca. 350 cycles for one packet, and down to ca. 20 cycles
> per packet for a burst of many packets.
> [Hideyuki]
> Thanks for sharing this useful info!
> Ah, I realize now that CPU cycle consumption is not linear, i.e. not
> like the following:
>
> 0 packets received  -> 0 cycles
> 1 packet received   -> 1 cycle
> 10 packets received -> 10 cycles
>
> It is very interesting; thanks for the information.
> I will keep it in mind.
>
> > One of our intermediate pipeline stages (which is not receiving or
> > transmitting packets, only processing them) uses ca. 150 cycles for a
> > burst of one packet, and down to ca. 110 cycles for a burst of many
> > packets.
>
> >
> > Nevertheless, your suggested API might be usable by simple
> ingress->routing->egress applications. So don’t let me discourage you!
> [Hideyuki]
> Thanks for supporting my idea.
> Yes, I agree that for simple forwarding applications
> this enhancement might be useful to monitor CPU usage "roughly".
>
> BR,
> Hideyuki Yamashita
> NTT TechnoCross
>
> >
>
>
>

-- 
Baruch Even
Platform Team Manager at WekaIO
+972-54-2577223
baruch@weka.io | https://www.weka.io/

