From: Shani Peretz <shperetz@nvidia.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
Date: Mon, 7 Jul 2025 05:45:28 +0000
Message-ID: <SA1PR12MB949141C3D0F5E987AB39CCC3BF4FA@SA1PR12MB9491.namprd12.prod.outlook.com>
In-Reply-To: <20250616083009.4a2d69f2@hermes.local>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday, 16 June 2025 18:30
> To: Shani Peretz <shperetz@nvidia.com>
> Cc: dev@dpdk.org
> Subject: Re: [RFC PATCH 0/5] Introduce mempool object new debug
> capabilities
>
> On Mon, 16 Jun 2025 10:29:05 +0300
> Shani Peretz <shperetz@nvidia.com> wrote:
>
> > This feature is designed to monitor the lifecycle of mempool objects
> > as they move between the application and the PMD.
> >
> > It will allow us to track the operations and transitions of each
> > mempool object throughout the system, helping with debugging and with
> > understanding object flow.
> >
> > The implementation includes several key components:
> > 1. Added a bitmap to the mempool object's header (rte_mempool_objhdr)
> >     that represents the operation history.
> > 2. Added functions that allow marking operations on a
> >     mempool object.
> > 3. Added a dump of the history to a file or the console
> >     (rte_mempool_objects_dump).
> > 4. Added a Python script that can parse and analyze the data and
> >     present it in a human-readable format.
> > 5. Added a compilation flag to enable the feature.
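> >
> > As a rough illustration of items 1 and 2 (the flag, field, and helper
> > names below are placeholders, not necessarily the ones in the patches):
> >
> >     #include <rte_mempool.h>
> >
> >     /* (1) The per-object header grows a history bitmap; the real
> >      * field is added inside struct rte_mempool_objhdr in
> >      * rte_mempool.h, e.g.:
> >      *
> >      *     uint64_t history;   // ring of small operation codes
> >      */
> >
> >     /* (2) Marking an operation shifts its code into the bitmap,
> >      * so the low bits always hold the most recent operations
> >      * (helper name and 4-bit encoding are illustrative only): */
> >     static inline void
> >     mempool_obj_mark(void *obj, uint8_t op)
> >     {
> >         struct rte_mempool_objhdr *hdr = rte_mempool_get_header(obj);
> >
> >         hdr->history = (hdr->history << 4) | (op & 0xFu);
> >     }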
> >
> > Shani Peretz (5):
> > mempool: record mempool objects operations history
> > drivers: add mempool history compilation flag
> > net/mlx5: mark an operation in mempool object's history
> > app/testpmd: add testpmd command to dump mempool history
> > usertool: add a script to parse mempool history dump
> >
> > app/test-pmd/cmdline.c | 59 +++++++-
> > config/meson.build | 1 +
> > drivers/meson.build | 7 +
> > drivers/net/af_packet/meson.build | 1 +
> > drivers/net/af_xdp/meson.build | 1 +
> > drivers/net/ark/meson.build | 2 +
> > drivers/net/atlantic/meson.build | 2 +
> > drivers/net/avp/meson.build | 2 +
> > drivers/net/axgbe/meson.build | 2 +
> > drivers/net/bnx2x/meson.build | 1 +
> > drivers/net/bnxt/meson.build | 2 +
> > drivers/net/bonding/meson.build | 1 +
> > drivers/net/cnxk/meson.build | 1 +
> > drivers/net/cxgbe/meson.build | 2 +
> > drivers/net/dpaa/meson.build | 2 +
> > drivers/net/dpaa2/meson.build | 2 +
> > drivers/net/ena/meson.build | 2 +
> > drivers/net/enetc/meson.build | 2 +
> > drivers/net/enetfec/meson.build | 2 +
> > drivers/net/enic/meson.build | 2 +
> > drivers/net/failsafe/meson.build | 1 +
> > drivers/net/gve/meson.build | 2 +
> > drivers/net/hinic/meson.build | 2 +
> > drivers/net/hns3/meson.build | 1 +
> > drivers/net/intel/cpfl/meson.build | 2 +
> > drivers/net/intel/e1000/meson.build | 2 +
> > drivers/net/intel/fm10k/meson.build | 2 +
> > drivers/net/intel/i40e/meson.build | 2 +
> > drivers/net/intel/iavf/meson.build | 2 +
> > drivers/net/intel/ice/meson.build | 1 +
> > drivers/net/intel/idpf/meson.build | 2 +
> > drivers/net/intel/ixgbe/meson.build | 2 +
> > drivers/net/ionic/meson.build | 2 +
> > drivers/net/mana/meson.build | 2 +
> > drivers/net/memif/meson.build | 1 +
> > drivers/net/mlx4/meson.build | 2 +
> > drivers/net/mlx5/meson.build | 1 +
> > drivers/net/mlx5/mlx5_rx.c | 9 ++
> > drivers/net/mlx5/mlx5_rx.h | 2 +
> > drivers/net/mlx5/mlx5_rxq.c | 9 +-
> > drivers/net/mlx5/mlx5_rxtx_vec.c | 6 +
> > drivers/net/mlx5/mlx5_tx.h | 7 +
> > drivers/net/mlx5/mlx5_txq.c | 1 +
> > drivers/net/mvneta/meson.build | 2 +
> > drivers/net/mvpp2/meson.build | 2 +
> > drivers/net/netvsc/meson.build | 2 +
> > drivers/net/nfb/meson.build | 2 +
> > drivers/net/nfp/meson.build | 2 +
> > drivers/net/ngbe/meson.build | 2 +
> > drivers/net/ntnic/meson.build | 4 +
> > drivers/net/null/meson.build | 1 +
> > drivers/net/octeon_ep/meson.build | 2 +
> > drivers/net/octeontx/meson.build | 2 +
> > drivers/net/pcap/meson.build | 1 +
> > drivers/net/pfe/meson.build | 2 +
> > drivers/net/qede/meson.build | 2 +
> > drivers/net/r8169/meson.build | 4 +-
> > drivers/net/ring/meson.build | 1 +
> > drivers/net/sfc/meson.build | 2 +
> > drivers/net/softnic/meson.build | 2 +
> > drivers/net/tap/meson.build | 1 +
> > drivers/net/thunderx/meson.build | 2 +
> > drivers/net/txgbe/meson.build | 2 +
> > drivers/net/vdev_netvsc/meson.build | 2 +
> > drivers/net/vhost/meson.build | 2 +
> > drivers/net/virtio/meson.build | 2 +
> > drivers/net/vmxnet3/meson.build | 2 +
> > drivers/net/xsc/meson.build | 2 +
> > drivers/net/zxdh/meson.build | 4 +
> > lib/ethdev/rte_ethdev.h | 14 ++
> > lib/mempool/rte_mempool.c | 111 +++++++++++++++
> > lib/mempool/rte_mempool.h | 106 ++++++++++++++
> > meson_options.txt | 2 +
> > .../dpdk-mempool_object_history_parser.py | 129 ++++++++++++++++++
> > 74 files changed, 571 insertions(+), 4 deletions(-)
> > create mode 100755 usertools/dpdk-mempool_object_history_parser.py
> >
>
> Could this not already be done with tracing infrastructure?
Hey,
We did consider tracing, but:
- The trace buffer has limited capacity, so records of older mbufs get overwritten in the trace output while those mbufs are still in use.
- Some operations may be lost entirely: as far as I understand, the performance overhead of tracing on the datapath can cause trace misses, so we might not capture the complete picture.
In contrast, the proposed history lives in each object's header for the object's whole lifetime, so nothing is evicted; a rough usage sketch is below.
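
Usage sketch (rte_mempool_objects_dump is the dump function added by the series; the build option name and the exact signature shown here are assumptions for illustration):

    /* Build with the feature enabled, e.g. (option name illustrative):
     *   meson setup build -Dmempool_object_history=true
     */
    #include <stdio.h>
    #include <rte_mempool.h>

    static void
    dump_history(void)
    {
        /* Dump every object's recorded operation history; the output
         * can then be analyzed offline with
         * usertools/dpdk-mempool_object_history_parser.py. */
        rte_mempool_objects_dump(stdout);
    }
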
WDYT?