From: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
To: Ray Kinsella <mdr@ashroe.eu>, dpdk-dev <dev@dpdk.org>,
"dave@barachs.net" <dave@barachs.net>
Subject: Re: [dpdk-dev] [EXT] Re: [RFC] DPDK Trace support
Date: Mon, 13 Jan 2020 12:04:34 +0000
Message-ID: <BYAPR18MB2424987ECA5FB2C184999918C8350@BYAPR18MB2424.namprd18.prod.outlook.com>
In-Reply-To: <d4977e85-d612-f4bd-63d2-4d98895b3335@ashroe.eu>
> -----Original Message-----
> From: Ray Kinsella <mdr@ashroe.eu>
> Sent: Monday, January 13, 2020 4:30 PM
> To: Jerin Jacob Kollanukkaran <jerinj@marvell.com>; dpdk-dev
> <dev@dpdk.org>; dave@barachs.net
> Subject: [EXT] Re: [RFC] [dpdk-dev] DPDK Trace support
>
> Hi Jerin,
Hi Ray,
>
> Any idea why lttng performance is so poor?
100ns is the expected number based on the Lttng presentations, and even that
100ns is for high-end x86 machines. Here is the perf output; the overhead looks
like it comes from the ring buffer implementation, due to its feature set.
Moreover, for a normal Linux application, 100ns may not be bad; it is just that
DPDK's fast path needs less.
45.07% liblttng-ust.so.0.0.0 [.] lttng_event_reserve
25.48% liblttng-ust.so.0.0.0 [.] lttng_event_commit
6.30% calibrate [.] __event_probe__dpdk___zero_arg
5.05% calibrate [.] __worker_ZERO_ARG
4.87% liblttng-ust-tracepoint.so.0.0.0 [.] tp_rcu_read_lock_bp
4.79% liblttng-ust-tracepoint.so.0.0.0 [.] tp_rcu_read_unlock_bp
4.43% ld-2.29.so [.] _dl_tlsdesc_return
1.94% calibrate [.] plugin_getcpu
1.42% calibrate [.] plugin_read64
0.65% liblttng-ust-tracepoint.so.0.0.0 [.] tp_rcu_dereference_sym_bp
Note:
- Performance is even worse if we don't use snapshot mode and the DPDK plugins
for get_clock and get_cpu. These numbers are based on the optimization hooks
Lttng provides in the framework.
> I would have naturally gone there to benefit from the existing toolchain.
Yes, that's the reason why I started with Lttng. After the integration, testpmd
performance dipped, so I added the following test case to verify the overhead:
https://github.com/jerinjacobk/lttng-overhead
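The harness measures the per-call cost with the TSC. A simplified sketch of the
measurement loop (the real code is in the repo above; the dpdk:zero_arg
provider/event names are illustrative, with the provider header assumed to be
generated by lttng-gen-tp):

  #include <stdio.h>
  #include <stdint.h>
  #include <rte_cycles.h>
  #include "calibrate_tp.h" /* lttng-ust tracepoint provider header */

  static void
  measure_zero_arg(void)
  {
          const uint64_t n = 1000000;
          uint64_t i, start, cycles;

          start = rte_rdtsc_precise();
          for (i = 0; i < n; i++)
                  tracepoint(dpdk, zero_arg); /* event under test */
          cycles = rte_rdtsc_precise() - start;
          printf("ZERO_ARG: cycles=%f\n", (double)cycles / n);
  }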
> Have you looked at the FD.io logging/tracing infrastructure for inspiration?
Based on my understanding, VPP has a VPP-specific trace format, trace emitter,
and trace viewer. Since Lttng uses CTF, which is open, we could leverage the
open-source viewers and post-processing tools that understand CTF. A
high-performance trace emitter looks like the only missing piece in Lttng for
us. Of course, we can use the FD.io logging documentation for reference.
>
> Ray K
>
> On 13/01/2020 10:40, Jerin Jacob Kollanukkaran wrote:
> > Hi All,
> >
> > I would like to add tracing support for DPDK.
> > I am planning to add this support in v20.05 release.
> >
> > This RFC attempts to get feedback from the community on
> >
> > a) Tracing use cases.
> > b) Tracing requirements.
> > c) Implementation choices.
> > d) Trace format.
> >
> > Use-cases
> > ---------
> > - In most cases, the DPDK provider will not have access to the DPDK
> > customer's applications. To debug/analyze slow-path and fast-path DPDK API
> > usage from the field, we need to have integrated trace support in DPDK.
> >
> > - Need a low-overhead, fast-path, multi-core PMD debugging/analysis
> > infrastructure in DPDK to fix functional and performance issue(s) in PMDs.
> >
> > - Post-trace analysis tools can report various statistics across the
> > system, such as cpu_idle(), using the timestamps added in the trace.
> >
> >
> > Requirements:
> > -------------
> > - Support for Linux, FreeBSD and Windows OS
> > - Open trace format
> > - Multi-platform Open source trace viewer
> > - A very low-overhead trace API for DPDK fast-path tracing/debugging.
> > - Dynamic enable/disable of trace events
> >
> >
> > To enable trace support in DPDK, the following items need to be worked out:
> >
> > a) Add the DPDK trace points in the DPDK source code.
> >
> > - This includes updating DPDK functions such as rte_eth_dev_configure(),
> > rte_eth_dev_start(), and rte_eth_rx_burst() to emit the trace (one possible
> > shape is sketched below).
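> >
> > For illustration, one possible shape for a fast-path trace point (all
> > function and macro names below are hypothetical; the exact API is part of
> > what this RFC should decide):
> >
> > /* Hypothetical fast-path trace point; the emit helpers would write to
> >  * the per-lcore trace buffer, and the enable check keeps it a cheap
> >  * branch when tracing is off. */
> > static __rte_always_inline void
> > rte_ethdev_trace_rx_burst(uint16_t port_id, uint16_t queue_id,
> >                           uint16_t nb_rx)
> > {
> >         if (!rte_trace_is_enabled()) /* dynamic enable/disable */
> >                 return;
> >         rte_trace_emit_u16(port_id);
> >         rte_trace_emit_u16(queue_id);
> >         rte_trace_emit_u16(nb_rx);
> > }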
> >
> > b) Choosing suitable serialization-format
> >
> > - Common Trace Format (CTF) is an open format and language for describing
> > trace formats. This enables tool reuse, of which line-textual (babeltrace)
> > and graphical (Trace Compass) variants already exist.
> >
> > CTF should look familiar to C programmers but adds stronger typing.
> > See "CTF - A Flexible, High-performance Binary Trace Format":
> >
> > https://diamon.org/ctf/
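> >
> > As a flavor of the stronger typing, an illustrative TSDL event declaration
> > for the rx_burst trace point sketched above (the integer typealiases such
> > as uint16_t are assumed to be declared earlier in the CTF metadata; the
> > event name and id are made up):
> >
> > event {
> >         name = "dpdk:rx_burst";
> >         id = 1;
> >         stream_id = 0;
> >         fields := struct {
> >                 uint16_t port_id;
> >                 uint16_t queue_id;
> >                 uint16_t nb_rx;
> >         };
> > };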
> >
> > c) Writing the on-target serialization code,
> >
> > See the section below (Lttng CTF trace emitter vs DPDK-specific CTF
> > trace emitter).
> >
> > d) Deciding on and writing the I/O transport mechanics,
> >
> > For performance reasons, it should be backed by huge-page memory, with the
> > filled buffers written out to a file (a reservation sketch follows below).
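> >
> > A minimal sketch of reserving such a buffer from hugepages (the memzone
> > name and sizes are illustrative):
> >
> > #include <rte_common.h>
> > #include <rte_memzone.h>
> >
> > #define TRACE_BUF_SZ (1 << 20) /* per-lcore buffer, power of two */
> >
> > /* Reserve one hugepage-backed buffer per lcore; a writer thread can
> >  * periodically flush filled buffers to the CTF stream files. */
> > static const struct rte_memzone *
> > trace_bufs_reserve(void)
> > {
> >         return rte_memzone_reserve_aligned("dpdk_trace",
> >                         (size_t)TRACE_BUF_SZ * RTE_MAX_LCORE,
> >                         SOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);
> > }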
> >
> > e) Writing the PC-side deserializer/parser,
> >
> > Both babeltrace (a CLI tool) and Trace Compass (a GUI tool) support CTF.
> > See:
> > https://lttng.org/viewers/
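> >
> > Reading a CTF trace on the PC side is then as simple as running
> > "babeltrace /path/to/trace-directory" and post-processing the text output.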
> >
> > f) Writing tools for filtering and presentation.
> >
> > See item (e)
> >
> >
> > Lttng CTF trace emitter vs DPDK specific CTF trace emitter
> > ----------------------------------------------------------
> >
> > I have written a performance evaluation application to measure the
> > overhead of the Lttng CTF emitter (the fast-path infrastructure that the
> > https://lttng.org/ library uses to emit the trace):
> >
> > https://github.com/jerinjacobk/lttng-overhead
> > https://github.com/jerinjacobk/lttng-overhead/blob/master/README
> >
> > I could improve the performance by 30% by adding "DPDK"-based plugins for
> > get_clock() and get_cpu() (a sketch follows the log below). Here are the
> > performance numbers, after adding the plugins, on x86 and the various
> > arm64 boards that I have access to.
> >
> > On high-end x86, it comes to around 236 cycles/~91ns @ 2.6GHz (see the
> > last line in the log (ZERO_ARG)). On arm64, it varies from 312 cycles to
> > 1100 cycles (based on the class of CPU). In short, based on the IPC
> > capabilities, the cost would be around 100ns to 400ns for a single void
> > trace (a trace without any argument).
> >
> >
> > [lttng-overhead-x86] $ sudo ./calibrate/build/app/calibrate -c 0xc0
> > make[1]: Entering directory '/export/lttng-overhead-x86/calibrate'
> > make[1]: Leaving directory '/export/lttng-overhead-x86/calibrate'
> > EAL: Detected 56 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probing VFIO support...
> > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > EAL: probe driver: 8086:1521 net_e1000_igb
> > EAL: PCI device 0000:01:00.1 on NUMA socket 0
> > EAL: probe driver: 8086:1521 net_e1000_igb
> > CPU Timer freq is 2600.000000MHz
> > NOP: cycles=0.194834 ns=0.074936
> > GET_CLOCK: cycles=47.854658 ns=18.405638
> > GET_CPU: cycles=30.995892 ns=11.921497
> > ZERO_ARG: cycles=236.945113 ns=91.132736
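> >
> > For reference, the get_clock/get_cpu overrides above hook into lttng-ust's
> > plugin mechanism (the shared object is loaded via the
> > LTTNG_UST_CLOCK_PLUGIN/LTTNG_UST_GETCPU_PLUGIN environment variables). A
> > minimal sketch of the clock side, assuming lttng-ust's <lttng/ust-clock.h>
> > callback API (the getcpu side is analogous, via lttng_ust_getcpu_override()
> > in <lttng/ust-getcpu.h>):
> >
> > #include <string.h>
> > #include <lttng/ust-clock.h>
> > #include <rte_cycles.h>
> >
> > static uint64_t plugin_read64(void)
> > {
> >         return rte_rdtsc(); /* raw TSC, avoids the clock_gettime() cost */
> > }
> >
> > static uint64_t plugin_freq(void)
> > {
> >         return rte_get_tsc_hz();
> > }
> >
> > static int plugin_uuid(char *uuid)
> > {
> >         /* Any stable UUID string identifying this clock source. */
> >         strncpy(uuid, "c9d0ea5e-0f2b-4c32-a741-1929b3bd1ee6",
> >                 LTTNG_UST_UUID_STR_LEN);
> >         return 0;
> > }
> >
> > static const char *plugin_name(void) { return "dpdk_tsc"; }
> > static const char *plugin_desc(void) { return "DPDK TSC trace clock"; }
> >
> > void lttng_ust_clock_plugin_init(void)
> > {
> >         lttng_ust_trace_clock_set_read64_cb(plugin_read64);
> >         lttng_ust_trace_clock_set_freq_cb(plugin_freq);
> >         lttng_ust_trace_clock_set_uuid_cb(plugin_uuid);
> >         lttng_ust_trace_clock_set_name_cb(plugin_name);
> >         lttng_ust_trace_clock_set_description_cb(plugin_desc);
> >         lttng_ust_enable_trace_clock_override();
> > }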
> >
> >
> > We will have only 16.75ns to process a packet at 59.2 Mpps (40Gbps with
> > minimum-size packets), so IMO the Lttng CTF emitter may not fit the DPDK
> > fast-path purpose, due to the cost associated with the generic Lttng
> > features.
> >
> > One option could be to have a native CTF emitter in EAL/DPDK that emits
> > the trace into a hugepage buffer. I think it would cost only a handful of
> > cycles if we limit the features to the requirements above (see the sketch
> > below).
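> >
> > To give an idea of the target cost, the fast path of such a native emitter
> > could be as small as the sketch below (per-lcore buffer carved out of the
> > hugepage memzone; all names are illustrative, not a proposed API):
> >
> > #include <rte_common.h>
> > #include <rte_cycles.h>
> > #include <rte_lcore.h>
> >
> > #define TRACE_BUF_SZ (1 << 20) /* power of two, so the offset can wrap */
> >
> > struct trace_hdr {
> >         uint64_t tsc;      /* rte_rdtsc() timestamp */
> >         uint16_t event_id; /* index into the CTF metadata */
> > }; /* padded to 16 bytes, so records never straddle the buffer end */
> >
> > struct trace_buf {
> >         uint32_t off;      /* write offset within the buffer */
> >         uint8_t *mem;      /* points into the hugepage memzone */
> > };
> >
> > extern struct trace_buf trace_bufs[RTE_MAX_LCORE];
> >
> > static __rte_always_inline void
> > trace_emit_zero_arg(uint16_t event_id)
> > {
> >         struct trace_buf *tb = &trace_bufs[rte_lcore_id()];
> >         struct trace_hdr *h = (struct trace_hdr *)(tb->mem + tb->off);
> >
> >         h->tsc = rte_rdtsc();
> >         h->event_id = event_id;
> >         tb->off = (tb->off + sizeof(*h)) & (TRACE_BUF_SZ - 1); /* wrap */
> > }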
> >
> > The upside of using the Lttng CTF emitter:
> > a) No need to write a new CTF trace emitter (item (c) above).
> >
> > The downsides of the Lttng CTF emitter (item (c)):
> > a) Performance issues (see above).
> > b) Lack of Windows OS support; it looks like it has only basic FreeBSD
> > support.
> > c) A DPDK library dependency on lttng for tracing.
> >
> > So it is probably good to have a native CTF emitter in DPDK and reuse all
> > the open-source trace viewer (babeltrace and Trace Compass) and format
> > (CTF) infrastructure. I think it would be the best of both worlds.
> >
> > Any thoughts on this subject? Based on the community feedback, I can work
> > on the patch for v20.05.
> >