From: "Morten Brørup" <mb@smartsharesystems.com>
To: "Shani Peretz" <shperetz@nvidia.com>,
"Stephen Hemminger" <stephen@networkplumber.org>
Cc: <dev@dpdk.org>
Subject: RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
Date: Mon, 7 Jul 2025 14:10:31 +0200 [thread overview]
Message-ID: <98CBD80474FA8B44BF855DF32C47DC35E9FD92@smartserver.smartshare.dk> (raw)
In-Reply-To: <SA1PR12MB949141C3D0F5E987AB39CCC3BF4FA@SA1PR12MB9491.namprd12.prod.outlook.com>
> From: Shani Peretz [mailto:shperetz@nvidia.com]
> Sent: Monday, 7 July 2025 07.45
>
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Monday, 16 June 2025 18:30
> >
> > On Mon, 16 Jun 2025 10:29:05 +0300
> > Shani Peretz <shperetz@nvidia.com> wrote:
> >
> > > This feature is designed to monitor the lifecycle of mempool objects
> > > as they move between the application and the PMD.
> > >
> > > It will allow us to track the operations and transitions of each
> > > mempool object throughout the system, helping with debugging and
> > understanding object flow.
> > >
> > > The implementation includes several key components:
> > > 1. Added a bitmap to mempool's header (rte_mempool_objhdr)
> > > that represents the operation history.
> > > 2. Added functions that allow marking operations on
> > > mempool objects.
> > > 3. Added a function that dumps the history to a file or
> > > the console (rte_mempool_objects_dump).
> > > 4. Added a Python script that can parse and analyze the data
> > > and present it in a human-readable format.
> > > 5. Added compilation flag to enable the feature.
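[Editor's note: the RFC's actual field and enum names are not shown in this thread, so the following is only an illustrative sketch of item 1 above: a per-object history word in which each operation is recorded as a 4-bit code, with the oldest events shifted out the high end. All identifiers here are hypothetical, not the patch's API.]

```c
#include <stdint.h>

/* Hypothetical operation codes -- illustrative only; the RFC's
 * real operation list is not visible in this thread. */
enum obj_op {
	OBJ_OP_ALLOC = 0x1,
	OBJ_OP_TX    = 0x2,
	OBJ_OP_RX    = 0x3,
	OBJ_OP_FREE  = 0x4,
};

#define OP_BITS  4
#define OP_MASK  0xFULL
#define OP_SLOTS (64 / OP_BITS)	/* 16 events fit in one uint64_t */

/* Shift the new 4-bit code into the low nibble of the history
 * word; the oldest event falls off the high end. */
static inline void
history_mark(uint64_t *history, enum obj_op op)
{
	*history = (*history << OP_BITS) | ((uint64_t)op & OP_MASK);
}

/* Read the i-th most recent event (0 = newest). */
static inline unsigned
history_get(uint64_t history, unsigned i)
{
	return (history >> (i * OP_BITS)) & OP_MASK;
}
```

With this layout a single uint64_t per object records the last 16 operations, which is what bounds the history depth discussed later in the thread.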
> > >
> >
> > Could this not already be done with tracing infrastructure?
>
> Hey,
> We did consider tracing but:
> - It has limited capacity, which will result in older mbufs being
> lost in the tracing output while they are still in use
> - Some operations may be lost, so we might not capture the
> complete picture, due to trace misses caused by the performance
> overhead of tracing on the datapath, as far as I understand
> WDYT?
This looks like an alternative trace infrastructure, just for mempool objects.
But the list of operations is limited to basic operations on mbuf mempool objects.
It lacks support for other operations on mbufs, e.g. IP fragmentation/defragmentation library operations, application specific operations, and transitions between the mempool cache and the mempool backing store.
It also lacks support for operations on other mempool objects than mbufs.
You might be better off using the trace infrastructure, or something similar.
Using the trace infrastructure allows you to record more detailed information along with the transitions of "owners" of each mbuf.
I'm not opposing this RFC, but I think it is very limited, and not sufficiently expandable.
I get the point that trace can cause old events on active mbufs to be lost, and the concept of a trace buffer per mempool object is a good solution to that.
But I think you need to be able to store much more information with each transition; at least a timestamp. And if you do that, you need much more than 4 bits per event.
Alternatively, if you do proceed with the RFC in the current form, I have two key suggestions:
1. Make it possible to register operations at runtime. (Look at dynamic mbuf fields for inspiration.)
2. Use 8 bits for the operation, instead of 4.
And if you need a longer trace history, you can use the rte_bitset library instead of a single uint64_t.
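[Editor's note: the two suggestions above can be sketched as follows, assuming 8-bit event codes registered at runtime in the style of dynamic mbuf fields. Every name here is hypothetical; none of this is an existing DPDK API.]

```c
#include <stdint.h>
#include <stddef.h>

#define HIST_OP_BITS 8
#define HIST_OP_MAX  256	/* 8-bit codes: up to 256 operations */

static const char *hist_op_names[HIST_OP_MAX];
static unsigned hist_op_count;

/* Register an operation name at runtime and return its 8-bit code,
 * or -1 if the 256-entry code space is exhausted. Libraries and
 * applications could register their own operations this way. */
static int
hist_op_register(const char *name)
{
	if (hist_op_count >= HIST_OP_MAX)
		return -1;
	hist_op_names[hist_op_count] = name;
	return (int)hist_op_count++;
}

/* With 8 bits per event, a single uint64_t holds only the last
 * 8 events; a longer history would need rte_bitset or an array. */
static inline void
hist_mark8(uint64_t *history, uint8_t op)
{
	*history = (*history << HIST_OP_BITS) | op;
}
```

The trade-off is visible immediately: doubling the code width to 8 bits halves the per-uint64_t history depth from 16 events to 8, which is why a wider backing store such as rte_bitset becomes attractive.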
Thread overview: 11+ messages
2025-06-16 7:29 Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 1/5] mempool: record mempool objects operations history Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 2/5] drivers: add mempool history compilation flag Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 3/5] net/mlx5: mark an operation in mempool object's history Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 4/5] app/testpmd: add testpmd command to dump mempool history Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 5/5] usertool: add a script to parse mempool history dump Shani Peretz
2025-06-16 15:30 ` [RFC PATCH 0/5] Introduce mempool object new debug capabilities Stephen Hemminger
2025-06-19 12:57 ` Morten Brørup
2025-07-07 5:46 ` Shani Peretz
2025-07-07 5:45 ` Shani Peretz
2025-07-07 12:10 ` Morten Brørup [this message]