* [RFC PATCH 0/5] Introduce mempool object new debug capabilities
@ 2025-06-16 7:29 Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 1/5] mempool: record mempool objects operations history Shani Peretz
` (6 more replies)
0 siblings, 7 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev; +Cc: Shani Peretz
This feature is designed to monitor the lifecycle of mempool objects
as they move between the application and the PMD.
It will allow us to track the operations and transitions of each mempool
object throughout the system, helping with debugging and understanding object flow.
The implementation includes several key components:
1. Added a bitmap to the mempool object header (rte_mempool_objhdr)
that represents the operation history.
2. Added functions that allow marking operations on
mempool objects.
3. Added a dump of the history to a file or the console
(rte_mempool_objects_dump).
4. Added a Python script that parses and analyzes the data and
presents it in a human-readable format.
5. Added a compilation flag to enable the feature.
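A minimal usage sketch (the option, command, and script names are the
ones introduced by the patches below; the PCI address is a placeholder):

    meson setup build -Denable_mempool_history=true
    ninja -C build
    ./build/app/dpdk-testpmd -a <pci_addr> -- -i
    testpmd> dump_mempool_objects_history history.txt
    ./usertools/dpdk-mempool_object_history_parser.py history.txt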
Shani Peretz (5):
mempool: record mempool objects operations history
drivers: add mempool history compilation flag
net/mlx5: mark an operation in mempool object's history
app/testpmd: add testpmd command to dump mempool history
usertool: add a script to parse mempool history dump
app/test-pmd/cmdline.c | 59 +++++++-
config/meson.build | 1 +
drivers/meson.build | 7 +
drivers/net/af_packet/meson.build | 1 +
drivers/net/af_xdp/meson.build | 1 +
drivers/net/ark/meson.build | 2 +
drivers/net/atlantic/meson.build | 2 +
drivers/net/avp/meson.build | 2 +
drivers/net/axgbe/meson.build | 2 +
drivers/net/bnx2x/meson.build | 1 +
drivers/net/bnxt/meson.build | 2 +
drivers/net/bonding/meson.build | 1 +
drivers/net/cnxk/meson.build | 1 +
drivers/net/cxgbe/meson.build | 2 +
drivers/net/dpaa/meson.build | 2 +
drivers/net/dpaa2/meson.build | 2 +
drivers/net/ena/meson.build | 2 +
drivers/net/enetc/meson.build | 2 +
drivers/net/enetfec/meson.build | 2 +
drivers/net/enic/meson.build | 2 +
drivers/net/failsafe/meson.build | 1 +
drivers/net/gve/meson.build | 2 +
drivers/net/hinic/meson.build | 2 +
drivers/net/hns3/meson.build | 1 +
drivers/net/intel/cpfl/meson.build | 2 +
drivers/net/intel/e1000/meson.build | 2 +
drivers/net/intel/fm10k/meson.build | 2 +
drivers/net/intel/i40e/meson.build | 2 +
drivers/net/intel/iavf/meson.build | 2 +
drivers/net/intel/ice/meson.build | 1 +
drivers/net/intel/idpf/meson.build | 2 +
drivers/net/intel/ixgbe/meson.build | 2 +
drivers/net/ionic/meson.build | 2 +
drivers/net/mana/meson.build | 2 +
drivers/net/memif/meson.build | 1 +
drivers/net/mlx4/meson.build | 2 +
drivers/net/mlx5/meson.build | 1 +
drivers/net/mlx5/mlx5_rx.c | 9 ++
drivers/net/mlx5/mlx5_rx.h | 2 +
drivers/net/mlx5/mlx5_rxq.c | 9 +-
drivers/net/mlx5/mlx5_rxtx_vec.c | 6 +
drivers/net/mlx5/mlx5_tx.h | 7 +
drivers/net/mlx5/mlx5_txq.c | 1 +
drivers/net/mvneta/meson.build | 2 +
drivers/net/mvpp2/meson.build | 2 +
drivers/net/netvsc/meson.build | 2 +
drivers/net/nfb/meson.build | 2 +
drivers/net/nfp/meson.build | 2 +
drivers/net/ngbe/meson.build | 2 +
drivers/net/ntnic/meson.build | 4 +
drivers/net/null/meson.build | 1 +
drivers/net/octeon_ep/meson.build | 2 +
drivers/net/octeontx/meson.build | 2 +
drivers/net/pcap/meson.build | 1 +
drivers/net/pfe/meson.build | 2 +
drivers/net/qede/meson.build | 2 +
drivers/net/r8169/meson.build | 4 +-
drivers/net/ring/meson.build | 1 +
drivers/net/sfc/meson.build | 2 +
drivers/net/softnic/meson.build | 2 +
drivers/net/tap/meson.build | 1 +
drivers/net/thunderx/meson.build | 2 +
drivers/net/txgbe/meson.build | 2 +
drivers/net/vdev_netvsc/meson.build | 2 +
drivers/net/vhost/meson.build | 2 +
drivers/net/virtio/meson.build | 2 +
drivers/net/vmxnet3/meson.build | 2 +
drivers/net/xsc/meson.build | 2 +
drivers/net/zxdh/meson.build | 4 +
lib/ethdev/rte_ethdev.h | 14 ++
lib/mempool/rte_mempool.c | 111 +++++++++++++++
lib/mempool/rte_mempool.h | 106 ++++++++++++++
meson_options.txt | 2 +
.../dpdk-mempool_object_history_parser.py | 129 ++++++++++++++++++
74 files changed, 571 insertions(+), 4 deletions(-)
create mode 100755 usertools/dpdk-mempool_object_history_parser.py
--
2.34.1
* [RFC PATCH 1/5] mempool: record mempool objects operations history
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
@ 2025-06-16 7:29 ` Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 2/5] drivers: add mempool history compilation flag Shani Peretz
` (5 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev
Cc: Shani Peretz, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
Morten Brørup
This feature is designed to monitor the lifecycle of
mempool objects as they move between the application
and the PMD.
It will allow us to track the operations and transitions
of each mempool object throughout the system, helping
with debugging and understanding object flow.
Added a bitmap to the mempool object header that records
the most recent operations.
Each operation is represented by a 4-bit value, and for each mempool
object the history field stores the last 16 steps.
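Recording shifts each new 4-bit value into the low bits of the
object's 64-bit history word, so the newest step is always the lowest
nibble. A decoding sketch (helper and macro names as defined in this
patch):

    /* Sketch: print the 16 recorded steps of one object, newest first. */
    static void dump_one_history(void *obj)
    {
        uint64_t hs = rte_mempool_history_get(obj);
        int i;

        for (i = 0; i < 16; i++) {
            printf("step -%d: op %u\n", i,
                   (unsigned int)(hs & RTE_MEMPOOL_HISTORY_MASK));
            hs >>= RTE_MEMPOOL_HISTORY_BITS;
        }
    }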
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
lib/ethdev/rte_ethdev.h | 14 +++++
lib/mempool/rte_mempool.c | 111 ++++++++++++++++++++++++++++++++++++++
lib/mempool/rte_mempool.h | 106 ++++++++++++++++++++++++++++++++++++
3 files changed, 231 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index f9fb6ae549..77d15f1bcb 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -167,6 +167,7 @@
#include <rte_common.h>
#include <rte_config.h>
#include <rte_power_intrinsics.h>
+#include <rte_mempool.h>
#include "rte_ethdev_trace_fp.h"
#include "rte_dev_info.h"
@@ -6334,6 +6335,8 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
+ rte_mempool_history_bulk((void **)rx_pkts, nb_rx, RTE_MEMPOOL_APP_RX);
+
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
{
void *cb;
@@ -6692,8 +6695,19 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
}
#endif
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+ uint16_t requested_pkts = nb_pkts;
+ rte_mempool_history_bulk((void **)tx_pkts, nb_pkts, RTE_MEMPOOL_PMD_TX);
+#endif
+
nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+ if (requested_pkts > nb_pkts)
+ rte_mempool_history_bulk((void **)tx_pkts + nb_pkts,
+ requested_pkts - nb_pkts, RTE_MEMPOOL_BUSY_TX);
+#endif
+
rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
return nb_pkts;
}
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 1021ede0c2..1d84c72ba0 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -32,6 +32,7 @@
#include "mempool_trace.h"
#include "rte_mempool.h"
+
RTE_EXPORT_SYMBOL(rte_mempool_logtype)
RTE_LOG_REGISTER_DEFAULT(rte_mempool_logtype, INFO);
@@ -1632,3 +1633,113 @@ RTE_INIT(mempool_init_telemetry)
rte_telemetry_register_cmd("/mempool/info", mempool_handle_info,
"Returns mempool info. Parameters: pool_name");
}
+
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+static void
+rte_mempool_get_object_history_stat(FILE *f, struct rte_mempool *mp)
+{
+ struct rte_mempool_objhdr *hdr;
+
+ uint64_t n_never = 0; /* never been allocated. */
+ uint64_t n_free = 0; /* returned to the pool. */
+ uint64_t n_alloc = 0; /* allocated from the pool. */
+ uint64_t n_ref = 0; /* freed by pmd, not returned to the pool. */
+ uint64_t n_pmd_tx = 0; /* owned by PMD Tx. */
+ uint64_t n_pmd_rx = 0; /* owned by PMD Rx. */
+ uint64_t n_app_rx = 0; /* owned by Application on Rx. */
+ uint64_t n_app_alloc = 0; /* owned by Application on Alloc. */
+ uint64_t n_busy_tx = 0; /* owned by Application on Busy Tx. */
+ uint64_t n_total = 0; /* Total amount. */
+
+ if (f == NULL)
+ return;
+
+ STAILQ_FOREACH(hdr, &mp->elt_list, next) {
+ uint64_t hs = hdr->history;
+
+ rte_rmb();
+ n_total++;
+ if (hs == 0) {
+ n_never++;
+ continue;
+ }
+ switch (hs & RTE_MEMPOOL_HISTORY_MASK) {
+ case RTE_MEMPOOL_FREE:
+ n_free++;
+ break;
+ case RTE_MEMPOOL_PMD_FREE:
+ n_alloc++;
+ n_ref++;
+ break;
+ case RTE_MEMPOOL_PMD_TX:
+ n_alloc++;
+ n_pmd_tx++;
+ break;
+ case RTE_MEMPOOL_APP_RX:
+ n_alloc++;
+ n_app_rx++;
+ break;
+ case RTE_MEMPOOL_PMD_ALLOC:
+ n_alloc++;
+ n_pmd_rx++;
+ break;
+ case RTE_MEMPOOL_ALLOC:
+ n_alloc++;
+ n_app_alloc++;
+ break;
+ case RTE_MEMPOOL_BUSY_TX:
+ n_alloc++;
+ n_busy_tx++;
+ break;
+ default:
+ break;
+ }
+ fprintf(f, "%016" PRIX64 "\n", hs);
+ }
+
+ fprintf(f, "\n"
+ "Populated: %u\n"
+ "Never allocated: %" PRIu64 "\n"
+ "Free: %" PRIu64 "\n"
+ "Allocated: %" PRIu64 "\n"
+ "Referenced free: %" PRIu64 "\n"
+ "PMD owned Tx: %" PRIu64 "\n"
+ "PMD owned Rx: %" PRIu64 "\n"
+ "App owned alloc: %" PRIu64 "\n"
+ "App owned Rx: %" PRIu64 "\n"
+ "App owned busy: %" PRIu64 "\n"
+ "Counted total: %" PRIu64 "\n",
+ mp->populated_size, n_never, n_free + n_never, n_alloc,
+ n_ref, n_pmd_tx, n_pmd_rx, n_app_alloc, n_app_rx,
+ n_busy_tx, n_total);
+}
+#endif
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mempool_objects_dump, 25.07)
+void
+rte_mempool_objects_dump(__rte_unused FILE *f)
+{
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+ if (f == NULL) {
+ RTE_MEMPOOL_LOG(ERR, "Invalid file pointer");
+ return;
+ }
+
+ struct rte_mempool *mp = NULL;
+ struct rte_tailq_entry *te;
+ struct rte_mempool_list *mempool_list;
+
+ mempool_list = RTE_TAILQ_CAST(rte_mempool_tailq.head, rte_mempool_list);
+
+ rte_mcfg_mempool_read_lock();
+
+ TAILQ_FOREACH(te, mempool_list, next) {
+ mp = (struct rte_mempool *) te->data;
+ rte_mempool_get_object_history_stat(f, mp);
+ }
+
+ rte_mcfg_mempool_read_unlock();
+#else
+ RTE_MEMPOOL_LOG(INFO, "Mempool history recorder is not supported");
+#endif
+}
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index aedc100964..4c0ea6d0a7 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -60,6 +60,26 @@ extern "C" {
#define RTE_MEMPOOL_HEADER_COOKIE2 0xf2eef2eedadd2e55ULL /**< Header cookie. */
#define RTE_MEMPOOL_TRAILER_COOKIE 0xadd2e55badbadbadULL /**< Trailer cookie.*/
+/**
+ * Mempool trace operation bits and masks.
+ * Used to record the lifecycle of mempool objects through the system.
+ */
+#define RTE_MEMPOOL_HISTORY_BITS 4 /* Number of bits per history operation */
+#define RTE_MEMPOOL_HISTORY_MASK ((1ULL << RTE_MEMPOOL_HISTORY_BITS) - 1)
+
+/* History operation types */
+enum rte_mempool_history_op {
+ RTE_MEMPOOL_NEVER = 0, /* Initial state - never allocated */
+ RTE_MEMPOOL_FREE = 1, /* Freed back to mempool */
+ RTE_MEMPOOL_PMD_FREE = 2, /* Freed by PMD back to mempool */
+ RTE_MEMPOOL_PMD_TX = 3, /* Sent to PMD for Tx */
+ RTE_MEMPOOL_APP_RX = 4, /* Returned to application on Rx */
+ RTE_MEMPOOL_PMD_ALLOC = 5, /* Allocated by PMD for Rx */
+ RTE_MEMPOOL_ALLOC = 6, /* Allocated by application */
+ RTE_MEMPOOL_BUSY_TX = 7, /* Returned to app due to Tx busy */
+ RTE_MEMPOOL_MAX = 8 /* Maximum trace operation value */
+};
+
#ifdef RTE_LIBRTE_MEMPOOL_STATS
/**
* A structure that stores the mempool statistics (per-lcore).
@@ -157,6 +177,9 @@ struct rte_mempool_objhdr {
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
uint64_t cookie; /**< Debug cookie. */
#endif
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+ uint64_t history; /**< Debug object history. */
+#endif
};
/**
@@ -457,6 +480,83 @@ void rte_mempool_contig_blocks_check_cookies(const struct rte_mempool *mp,
#define RTE_MEMPOOL_OPS_NAMESIZE 32 /**< Max length of ops struct name. */
+
+#if RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY
+/**
+ * Get the history value from a mempool object header.
+ *
+ * @param obj
+ * Pointer to the mempool object.
+ * @return
+ * The history value from the object header.
+ */
+static inline uint64_t rte_mempool_history_get(void *obj)
+{
+ struct rte_mempool_objhdr *hdr;
+
+ if (unlikely(obj == NULL))
+ return 0;
+
+ hdr = rte_mempool_get_header(obj);
+ return hdr->history;
+}
+
+/**
+ * Mark a mempool object with the history value.
+ *
+ * @param obj
+ * Pointer to the mempool object.
+ * @param mark
+ * The history mark value to add.
+ */
+static inline void rte_mempool_history_mark(void *obj, uint32_t mark)
+{
+ struct rte_mempool_objhdr *hdr;
+
+ if (unlikely(obj == NULL))
+ return;
+
+ hdr = rte_mempool_get_header(obj);
+ hdr->history = (hdr->history << RTE_MEMPOOL_HISTORY_BITS) | mark;
+}
+
+/**
+ * Mark multiple mempool objects with the history value.
+ *
+ * @param b
+ * Array of pointers to mempool objects.
+ * @param n
+ * Number of objects to mark.
+ * @param mark
+ * The history mark value to add to each object.
+ */
+static inline void rte_mempool_history_bulk(void * const *b, uint32_t n, uint32_t mark)
+{
+ if (unlikely(b == NULL))
+ return;
+
+ while (n--)
+ rte_mempool_history_mark(*b++, mark);
+}
+#else
+static inline uint64_t rte_mempool_history_get(void *obj)
+{
+ RTE_SET_USED(obj);
+ return 0;
+}
+static inline void rte_mempool_history_mark(void *obj, uint32_t mark)
+{
+ RTE_SET_USED(obj);
+ RTE_SET_USED(mark);
+}
+static inline void rte_mempool_history_bulk(void * const *b, uint32_t n, uint32_t mark)
+{
+ RTE_SET_USED(b);
+ RTE_SET_USED(n);
+ RTE_SET_USED(mark);
+}
+#endif
+
/**
* Prototype for implementation specific data provisioning function.
*
@@ -1395,6 +1495,7 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
/* Increment stats now, adding in mempool always succeeds. */
RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_bulk, 1);
RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, n);
+ rte_mempool_history_bulk(obj_table, n, RTE_MEMPOOL_FREE);
__rte_assume(cache->flushthresh <= RTE_MEMPOOL_CACHE_MAX_SIZE * 2);
__rte_assume(cache->len <= RTE_MEMPOOL_CACHE_MAX_SIZE * 2);
@@ -1661,6 +1762,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
ret = rte_mempool_do_generic_get(mp, obj_table, n, cache);
if (likely(ret == 0))
RTE_MEMPOOL_CHECK_COOKIES(mp, obj_table, n, 1);
+ rte_mempool_history_bulk(obj_table, n, RTE_MEMPOOL_ALLOC);
rte_mempool_trace_generic_get(mp, obj_table, n, cache);
return ret;
}
@@ -1876,6 +1978,10 @@ static inline void *rte_mempool_get_priv(struct rte_mempool *mp)
RTE_MEMPOOL_HEADER_SIZE(mp, mp->cache_size);
}
+__rte_experimental
+void
+rte_mempool_objects_dump(FILE *f);
+
/**
* Dump the status of all mempools on the console
*
--
2.34.1
* [RFC PATCH 2/5] drivers: add mempool history compilation flag
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 1/5] mempool: record mempool objects operations history Shani Peretz
@ 2025-06-16 7:29 ` Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 3/5] net/mlx5: mark an operation in mempool object's history Shani Peretz
` (4 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev
Cc: Shani Peretz, Bruce Richardson, John W. Linville, Ciara Loftus,
Maryam Tahhan, Shepard Siegel, Ed Czeck, John Miller,
Igor Russkikh, Steven Webster, Matt Peters, Selwin Sebastian,
Julien Aube, Ajit Khaparde, Somnath Kotur, Chas Williams,
Min Hu (Connor),
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Harman Kalra, Potnuri Bharat Teja, Hemant Agrawal, Sachin Saxena,
Shai Brandes, Evgeny Schemeilin, Ron Beider, Amit Bernstein,
Wajeeh Atrash, Gagandeep Singh, Apeksha Gupta, John Daley,
Hyong Youb Kim, Gaetan Rivet, Jeroen de Borst, Joshua Washington,
Ziyang Xuan, Xiaoyun Wang, Dengdui Huang, Praveen Shetty,
Vladimir Medvedkin, Anatoly Burakov, Jingjing Wu, Andrew Boyer,
Long Li, Wei Hu, Jakub Grajciar, Matan Azrad,
Viacheslav Ovsiienko, Dariusz Sosnowski, Bing Zhao, Ori Kam,
Suanming Mou, Zyta Szpak, Liron Himi, Martin Spinler,
Chaoyong He, Jiawen Wu, Zaiyu Wang, Christian Koue Muf,
Serhii Iliushyk, Tetsuya Mukawa, Vamsi Attunuru,
Devendra Singh Rawat, Alok Prasad, Howard Wang, Chunhao Lin,
Xing Wang, Andrew Rybchenko, Cristian Dumitrescu,
Stephen Hemminger, Jerin Jacob, Maciej Czekaj, Jian Wang,
Maxime Coquelin, Chenbo Xia, Jochen Behrens, Renyong Wan, Na Na,
Rong Qian, Xiaoxiong Zhang, Dongwei Xu, Junlong Wang, Lijie Shan
This commit adds a new compilation flag to enable
mempool history recording in DPDK drivers.
- Add a support check for the mempool history feature in each driver's
meson file
- Skip building drivers that don't support mempool history when the
feature is enabled
Drivers must explicitly declare support for this feature in their meson
file to be built when RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY is enabled.
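The per-driver flag defaults to true in drivers/meson.build, so a
driver that cannot guarantee correct marking in its mbuf paths opts
out (sketch; this is what the hunks below do for every driver except
mlx5):

    # drivers/net/<driver>/meson.build
    support_mempool_history = false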
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
config/meson.build | 1 +
drivers/meson.build | 7 +++++++
drivers/net/af_packet/meson.build | 1 +
drivers/net/af_xdp/meson.build | 1 +
drivers/net/ark/meson.build | 2 ++
drivers/net/atlantic/meson.build | 2 ++
drivers/net/avp/meson.build | 2 ++
drivers/net/axgbe/meson.build | 2 ++
drivers/net/bnx2x/meson.build | 1 +
drivers/net/bnxt/meson.build | 2 ++
drivers/net/bonding/meson.build | 1 +
drivers/net/cnxk/meson.build | 1 +
drivers/net/cxgbe/meson.build | 2 ++
drivers/net/dpaa/meson.build | 2 ++
drivers/net/dpaa2/meson.build | 2 ++
drivers/net/ena/meson.build | 2 ++
drivers/net/enetc/meson.build | 2 ++
drivers/net/enetfec/meson.build | 2 ++
drivers/net/enic/meson.build | 2 ++
drivers/net/failsafe/meson.build | 1 +
drivers/net/gve/meson.build | 2 ++
drivers/net/hinic/meson.build | 2 ++
drivers/net/hns3/meson.build | 1 +
drivers/net/intel/cpfl/meson.build | 2 ++
drivers/net/intel/e1000/meson.build | 2 ++
drivers/net/intel/fm10k/meson.build | 2 ++
drivers/net/intel/i40e/meson.build | 2 ++
drivers/net/intel/iavf/meson.build | 2 ++
drivers/net/intel/ice/meson.build | 1 +
drivers/net/intel/idpf/meson.build | 2 ++
drivers/net/intel/ixgbe/meson.build | 2 ++
drivers/net/ionic/meson.build | 2 ++
drivers/net/mana/meson.build | 2 ++
drivers/net/memif/meson.build | 1 +
drivers/net/mlx4/meson.build | 2 ++
drivers/net/mlx5/meson.build | 1 +
drivers/net/mvneta/meson.build | 2 ++
drivers/net/mvpp2/meson.build | 2 ++
drivers/net/netvsc/meson.build | 2 ++
drivers/net/nfb/meson.build | 2 ++
drivers/net/nfp/meson.build | 2 ++
drivers/net/ngbe/meson.build | 2 ++
drivers/net/ntnic/meson.build | 4 ++++
drivers/net/null/meson.build | 1 +
drivers/net/octeon_ep/meson.build | 2 ++
drivers/net/octeontx/meson.build | 2 ++
drivers/net/pcap/meson.build | 1 +
drivers/net/pfe/meson.build | 2 ++
drivers/net/qede/meson.build | 2 ++
drivers/net/r8169/meson.build | 4 +++-
drivers/net/ring/meson.build | 1 +
drivers/net/sfc/meson.build | 2 ++
drivers/net/softnic/meson.build | 2 ++
drivers/net/tap/meson.build | 1 +
drivers/net/thunderx/meson.build | 2 ++
drivers/net/txgbe/meson.build | 2 ++
drivers/net/vdev_netvsc/meson.build | 2 ++
drivers/net/vhost/meson.build | 2 ++
drivers/net/virtio/meson.build | 2 ++
drivers/net/vmxnet3/meson.build | 2 ++
drivers/net/xsc/meson.build | 2 ++
drivers/net/zxdh/meson.build | 4 ++++
meson_options.txt | 2 ++
63 files changed, 121 insertions(+), 1 deletion(-)
diff --git a/config/meson.build b/config/meson.build
index f31fef216c..78d3574eb2 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -379,6 +379,7 @@ if get_option('mbuf_refcnt_atomic')
dpdk_conf.set('RTE_MBUF_REFCNT_ATOMIC', true)
endif
dpdk_conf.set10('RTE_IOVA_IN_MBUF', get_option('enable_iova_as_pa'))
+dpdk_conf.set10('RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY', get_option('enable_mempool_history'))
compile_time_cpuflags = []
subdir(arch_subdir)
diff --git a/drivers/meson.build b/drivers/meson.build
index 7b7205dfac..253180bccd 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -143,6 +143,7 @@ foreach subpath:subdirs
pkgconfig_extra_libs = []
testpmd_sources = []
require_iova_in_mbuf = true
+ support_mempool_history = true
# for handling base code files which may need extra cflags
base_sources = []
base_cflags = []
@@ -174,6 +175,12 @@ foreach subpath:subdirs
build = false
reason = 'requires IOVA in mbuf (set enable_iova_as_pa option)'
endif
+
+ if dpdk_conf.get('RTE_MEMPOOL_DEBUG_OBJECTS_HISTORY') == 1 and not support_mempool_history
+ build = false
+ reason = 'driver does not support mempool history tracing'
+ endif
+
# error out if we can't build a driver and that driver was explicitly requested,
# i.e. not via wildcard.
if not build and require_drivers and get_option('enable_drivers').contains(drv_path)
diff --git a/drivers/net/af_packet/meson.build b/drivers/net/af_packet/meson.build
index f45e4491d4..476d10e132 100644
--- a/drivers/net/af_packet/meson.build
+++ b/drivers/net/af_packet/meson.build
@@ -7,3 +7,4 @@ if not is_linux
endif
sources = files('rte_eth_af_packet.c')
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/af_xdp/meson.build b/drivers/net/af_xdp/meson.build
index 2d37bcc869..09228d8e54 100644
--- a/drivers/net/af_xdp/meson.build
+++ b/drivers/net/af_xdp/meson.build
@@ -92,3 +92,4 @@ if build
endif
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/ark/meson.build b/drivers/net/ark/meson.build
index 12b3935b85..c3126f570f 100644
--- a/drivers/net/ark/meson.build
+++ b/drivers/net/ark/meson.build
@@ -18,3 +18,5 @@ sources = files(
'ark_pktgen.c',
'ark_udm.c',
)
+
+support_mempool_history = false
diff --git a/drivers/net/atlantic/meson.build b/drivers/net/atlantic/meson.build
index bf5e47eaaf..09085f853c 100644
--- a/drivers/net/atlantic/meson.build
+++ b/drivers/net/atlantic/meson.build
@@ -17,3 +17,5 @@ sources = files(
'hw_atl/hw_atl_utils.c',
'rte_pmd_atlantic.c',
)
+
+support_mempool_history = false
diff --git a/drivers/net/avp/meson.build b/drivers/net/avp/meson.build
index ea9dd1f20e..ee46fca105 100644
--- a/drivers/net/avp/meson.build
+++ b/drivers/net/avp/meson.build
@@ -7,3 +7,5 @@ if not is_linux
endif
sources = files('avp_ethdev.c')
headers = files('rte_avp_common.h', 'rte_avp_fifo.h')
+
+support_mempool_history = false
diff --git a/drivers/net/axgbe/meson.build b/drivers/net/axgbe/meson.build
index b1cbe3d810..426b470673 100644
--- a/drivers/net/axgbe/meson.build
+++ b/drivers/net/axgbe/meson.build
@@ -20,3 +20,5 @@ cflags += '-Wno-cast-qual'
if arch_subdir == 'x86'
sources += files('axgbe_rxtx_vec_sse.c')
endif
+
+support_mempool_history = false
diff --git a/drivers/net/bnx2x/meson.build b/drivers/net/bnx2x/meson.build
index bc66ab8032..12a2f1a5e4 100644
--- a/drivers/net/bnx2x/meson.build
+++ b/drivers/net/bnx2x/meson.build
@@ -23,3 +23,4 @@ sources = files(
)
annotate_locks = false
+support_mempool_history = false
diff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build
index 79300eb6ac..4ef555a78c 100644
--- a/drivers/net/bnxt/meson.build
+++ b/drivers/net/bnxt/meson.build
@@ -64,3 +64,5 @@ elif arch_subdir == 'arm' and dpdk_conf.get('RTE_ARCH_64')
endif
annotate_locks = false
+support_mempool_history = false
+
diff --git a/drivers/net/bonding/meson.build b/drivers/net/bonding/meson.build
index d87e7a2522..d13ab392c6 100644
--- a/drivers/net/bonding/meson.build
+++ b/drivers/net/bonding/meson.build
@@ -24,5 +24,6 @@ deps += ['ip_frag']
headers = files('rte_eth_bond.h', 'rte_eth_bond_8023ad.h')
require_iova_in_mbuf = false
+support_mempool_history = false
cflags += no_wvla_cflag
diff --git a/drivers/net/cnxk/meson.build b/drivers/net/cnxk/meson.build
index d20a8601eb..ad5faa57f2 100644
--- a/drivers/net/cnxk/meson.build
+++ b/drivers/net/cnxk/meson.build
@@ -343,5 +343,6 @@ endforeach
headers = files('rte_pmd_cnxk.h')
require_iova_in_mbuf = false
+support_mempool_history = false
annotate_locks = false
diff --git a/drivers/net/cxgbe/meson.build b/drivers/net/cxgbe/meson.build
index 95cb58ab44..a3dc7ff906 100644
--- a/drivers/net/cxgbe/meson.build
+++ b/drivers/net/cxgbe/meson.build
@@ -25,3 +25,5 @@ sources = files(
includes += include_directories('base')
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/dpaa/meson.build b/drivers/net/dpaa/meson.build
index b1e4bbafb5..6d33f1a2b8 100644
--- a/drivers/net/dpaa/meson.build
+++ b/drivers/net/dpaa/meson.build
@@ -24,3 +24,5 @@ if cc.has_argument('-Wno-pointer-arith')
endif
headers = files('rte_pmd_dpaa.h')
+
+support_mempool_history = false
diff --git a/drivers/net/dpaa2/meson.build b/drivers/net/dpaa2/meson.build
index 89932b3037..e44e01ab95 100644
--- a/drivers/net/dpaa2/meson.build
+++ b/drivers/net/dpaa2/meson.build
@@ -29,3 +29,5 @@ includes += include_directories('base', 'mc')
headers = files('rte_pmd_dpaa2.h')
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/ena/meson.build b/drivers/net/ena/meson.build
index d02ed3f64f..620bdeeb82 100644
--- a/drivers/net/ena/meson.build
+++ b/drivers/net/ena/meson.build
@@ -17,3 +17,5 @@ sources = files(
deps += ['timer']
includes += include_directories('base', 'base/ena_defs')
+
+support_mempool_history = false
diff --git a/drivers/net/enetc/meson.build b/drivers/net/enetc/meson.build
index 966dc694fc..36e3740042 100644
--- a/drivers/net/enetc/meson.build
+++ b/drivers/net/enetc/meson.build
@@ -13,3 +13,5 @@ sources = files(
)
includes += include_directories('base')
+
+support_mempool_history = false
diff --git a/drivers/net/enetfec/meson.build b/drivers/net/enetfec/meson.build
index 29a464424b..c213da7640 100644
--- a/drivers/net/enetfec/meson.build
+++ b/drivers/net/enetfec/meson.build
@@ -11,3 +11,5 @@ sources = files(
'enet_uio.c',
'enet_rxtx.c',
)
+
+support_mempool_history = false
diff --git a/drivers/net/enic/meson.build b/drivers/net/enic/meson.build
index a48a497e46..1c160fdc62 100644
--- a/drivers/net/enic/meson.build
+++ b/drivers/net/enic/meson.build
@@ -35,3 +35,5 @@ if dpdk_conf.has('RTE_ARCH_X86_64')
endif
annotate_locks = false
+support_mempool_history = false
+
diff --git a/drivers/net/failsafe/meson.build b/drivers/net/failsafe/meson.build
index 90c965b705..889eb383bd 100644
--- a/drivers/net/failsafe/meson.build
+++ b/drivers/net/failsafe/meson.build
@@ -29,6 +29,7 @@ sources = files(
)
require_iova_in_mbuf = false
+support_mempool_history = false
if is_freebsd
annotate_locks = false
diff --git a/drivers/net/gve/meson.build b/drivers/net/gve/meson.build
index ed5ef0a1fc..c7155cc15d 100644
--- a/drivers/net/gve/meson.build
+++ b/drivers/net/gve/meson.build
@@ -20,3 +20,5 @@ sources = files(
includes += include_directories('base')
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/hinic/meson.build b/drivers/net/hinic/meson.build
index 36cc9431a6..0cd5fe6f72 100644
--- a/drivers/net/hinic/meson.build
+++ b/drivers/net/hinic/meson.build
@@ -21,3 +21,5 @@ includes += include_directories('base')
if is_freebsd
annotate_locks = false
endif
+
+support_mempool_history = false
diff --git a/drivers/net/hns3/meson.build b/drivers/net/hns3/meson.build
index 53a9dd6f39..ca9bbec971 100644
--- a/drivers/net/hns3/meson.build
+++ b/drivers/net/hns3/meson.build
@@ -34,6 +34,7 @@ sources = files(
)
require_iova_in_mbuf = false
+support_mempool_history = false
annotate_locks = false
diff --git a/drivers/net/intel/cpfl/meson.build b/drivers/net/intel/cpfl/meson.build
index 1f0269d50b..a821e5e999 100644
--- a/drivers/net/intel/cpfl/meson.build
+++ b/drivers/net/intel/cpfl/meson.build
@@ -35,3 +35,5 @@ if dpdk_conf.has('RTE_HAS_JANSSON')
)
ext_deps += jansson_dep
endif
+
+support_mempool_history = false
diff --git a/drivers/net/intel/e1000/meson.build b/drivers/net/intel/e1000/meson.build
index 924fe4ecae..a80de1964f 100644
--- a/drivers/net/intel/e1000/meson.build
+++ b/drivers/net/intel/e1000/meson.build
@@ -23,3 +23,5 @@ if not is_windows
'igc_txrx.c',
)
endif
+
+support_mempool_history = false
diff --git a/drivers/net/intel/fm10k/meson.build b/drivers/net/intel/fm10k/meson.build
index fac4750f8d..a7e68959e8 100644
--- a/drivers/net/intel/fm10k/meson.build
+++ b/drivers/net/intel/fm10k/meson.build
@@ -16,3 +16,5 @@ sources += files(
if arch_subdir == 'x86'
sources += files('fm10k_rxtx_vec.c')
endif
+
+support_mempool_history = false
diff --git a/drivers/net/intel/i40e/meson.build b/drivers/net/intel/i40e/meson.build
index 49e7f899e6..c4840f9bef 100644
--- a/drivers/net/intel/i40e/meson.build
+++ b/drivers/net/intel/i40e/meson.build
@@ -47,3 +47,5 @@ elif arch_subdir == 'arm'
endif
headers = files('rte_pmd_i40e.h')
+
+support_mempool_history = false
diff --git a/drivers/net/intel/iavf/meson.build b/drivers/net/intel/iavf/meson.build
index 0db94d6fe6..1c94518b5f 100644
--- a/drivers/net/intel/iavf/meson.build
+++ b/drivers/net/intel/iavf/meson.build
@@ -35,3 +35,5 @@ elif arch_subdir == 'arm'
endif
headers = files('rte_pmd_iavf.h')
+
+support_mempool_history = false
diff --git a/drivers/net/intel/ice/meson.build b/drivers/net/intel/ice/meson.build
index 8a20d0f297..d2e570d79c 100644
--- a/drivers/net/intel/ice/meson.build
+++ b/drivers/net/intel/ice/meson.build
@@ -45,3 +45,4 @@ sources += files(
)
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/intel/idpf/meson.build b/drivers/net/intel/idpf/meson.build
index a805d02ea2..e61df7a5f0 100644
--- a/drivers/net/intel/idpf/meson.build
+++ b/drivers/net/intel/idpf/meson.build
@@ -24,3 +24,5 @@ if arch_subdir == 'x86' and dpdk_conf.get('RTE_IOVA_IN_MBUF') == 1
sources_avx2 += files('idpf_common_rxtx_avx2.c')
sources_avx512 += files('idpf_common_rxtx_avx512.c')
endif
+
+support_mempool_history = false
diff --git a/drivers/net/intel/ixgbe/meson.build b/drivers/net/intel/ixgbe/meson.build
index d1122bb9cd..d525ef3871 100644
--- a/drivers/net/intel/ixgbe/meson.build
+++ b/drivers/net/intel/ixgbe/meson.build
@@ -32,3 +32,5 @@ elif arch_subdir == 'arm'
endif
headers = files('rte_pmd_ixgbe.h')
+
+support_mempool_history = false
diff --git a/drivers/net/ionic/meson.build b/drivers/net/ionic/meson.build
index cc6d5ce4db..ed96a3074b 100644
--- a/drivers/net/ionic/meson.build
+++ b/drivers/net/ionic/meson.build
@@ -24,3 +24,5 @@ sources = files(
)
includes += include_directories('../../common/ionic')
+
+support_mempool_history = false
diff --git a/drivers/net/mana/meson.build b/drivers/net/mana/meson.build
index e320da7fc4..01655c2cd7 100644
--- a/drivers/net/mana/meson.build
+++ b/drivers/net/mana/meson.build
@@ -53,3 +53,5 @@ foreach arg:required_symbols
subdir_done()
endif
endforeach
+
+support_mempool_history = false
diff --git a/drivers/net/memif/meson.build b/drivers/net/memif/meson.build
index 8b2aab1f39..07d106a3f9 100644
--- a/drivers/net/memif/meson.build
+++ b/drivers/net/memif/meson.build
@@ -14,3 +14,4 @@ sources = files(
deps += ['hash']
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/mlx4/meson.build b/drivers/net/mlx4/meson.build
index 869d2895c8..7bd54ff6aa 100644
--- a/drivers/net/mlx4/meson.build
+++ b/drivers/net/mlx4/meson.build
@@ -136,3 +136,5 @@ if dlopen_ibverbs
install_dir: dlopen_install_dir,
)
endif
+
+support_mempool_history = false
diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index 6a91692759..a259de5b87 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -90,6 +90,7 @@ else
endif
require_iova_in_mbuf = false
+support_mempool_history = true
testpmd_sources += files('mlx5_testpmd.c')
diff --git a/drivers/net/mvneta/meson.build b/drivers/net/mvneta/meson.build
index 2b0a94ddd0..41f68d0d60 100644
--- a/drivers/net/mvneta/meson.build
+++ b/drivers/net/mvneta/meson.build
@@ -26,3 +26,5 @@ sources = files(
deps += ['cfgfile', 'common_mvep']
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/mvpp2/meson.build b/drivers/net/mvpp2/meson.build
index 396e382128..8853fbe809 100644
--- a/drivers/net/mvpp2/meson.build
+++ b/drivers/net/mvpp2/meson.build
@@ -29,3 +29,5 @@ sources = files(
cflags += no_wvla_cflag
deps += ['cfgfile', 'common_mvep']
+
+support_mempool_history = false
diff --git a/drivers/net/netvsc/meson.build b/drivers/net/netvsc/meson.build
index ca94d97989..463f05aeeb 100644
--- a/drivers/net/netvsc/meson.build
+++ b/drivers/net/netvsc/meson.build
@@ -17,3 +17,5 @@ sources = files(
)
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/nfb/meson.build b/drivers/net/nfb/meson.build
index d7a255c928..e5f1f1f03c 100644
--- a/drivers/net/nfb/meson.build
+++ b/drivers/net/nfb/meson.build
@@ -23,3 +23,5 @@ sources = files(
)
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 4c34a73b70..a5d07ec813 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -65,3 +65,5 @@ else
endif
deps += ['hash', 'security', 'common_nfp']
+
+support_mempool_history = false
diff --git a/drivers/net/ngbe/meson.build b/drivers/net/ngbe/meson.build
index 319eb23c35..e4d90a290d 100644
--- a/drivers/net/ngbe/meson.build
+++ b/drivers/net/ngbe/meson.build
@@ -24,3 +24,5 @@ if arch_subdir == 'x86'
elif arch_subdir == 'arm'
sources += files('ngbe_rxtx_vec_neon.c')
endif
+
+support_mempool_history = false
diff --git a/drivers/net/ntnic/meson.build b/drivers/net/ntnic/meson.build
index b4c6cfe7de..bd46794ec3 100644
--- a/drivers/net/ntnic/meson.build
+++ b/drivers/net/ntnic/meson.build
@@ -124,3 +124,7 @@ sources = files(
'ntnic_vfio.c',
'ntnic_ethdev.c',
)
+
+cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/null/meson.build b/drivers/net/null/meson.build
index bad7dc1af7..4127a07959 100644
--- a/drivers/net/null/meson.build
+++ b/drivers/net/null/meson.build
@@ -3,3 +3,4 @@
sources = files('rte_eth_null.c')
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/octeon_ep/meson.build b/drivers/net/octeon_ep/meson.build
index cbb729b689..f5ff3b8a9d 100644
--- a/drivers/net/octeon_ep/meson.build
+++ b/drivers/net/octeon_ep/meson.build
@@ -28,3 +28,5 @@ foreach flag: extra_flags
cflags += flag
endif
endforeach
+
+support_mempool_history = false
diff --git a/drivers/net/octeontx/meson.build b/drivers/net/octeontx/meson.build
index fc8a5a73f2..82247a87a0 100644
--- a/drivers/net/octeontx/meson.build
+++ b/drivers/net/octeontx/meson.build
@@ -18,3 +18,5 @@ sources = files(
deps += ['mempool_octeontx', 'eventdev']
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/pcap/meson.build b/drivers/net/pcap/meson.build
index 676c55018e..28c86267a8 100644
--- a/drivers/net/pcap/meson.build
+++ b/drivers/net/pcap/meson.build
@@ -17,3 +17,4 @@ if is_windows
endif
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/pfe/meson.build b/drivers/net/pfe/meson.build
index 684b062090..2fb4ec4408 100644
--- a/drivers/net/pfe/meson.build
+++ b/drivers/net/pfe/meson.build
@@ -19,3 +19,5 @@ if cc.has_argument('-Wno-pointer-arith')
endif
includes += include_directories('base')
+
+support_mempool_history = false
diff --git a/drivers/net/qede/meson.build b/drivers/net/qede/meson.build
index e1b21d6ff5..a58811c9fe 100644
--- a/drivers/net/qede/meson.build
+++ b/drivers/net/qede/meson.build
@@ -22,3 +22,5 @@ sources = files(
if cc.has_argument('-Wno-format-nonliteral')
cflags += '-Wno-format-nonliteral'
endif
+
+support_mempool_history = false
diff --git a/drivers/net/r8169/meson.build b/drivers/net/r8169/meson.build
index d1e65377a3..56eb726c84 100644
--- a/drivers/net/r8169/meson.build
+++ b/drivers/net/r8169/meson.build
@@ -18,4 +18,6 @@ sources = files(
'base/rtl8126a.c',
'base/rtl8126a_mcu.c',
'base/rtl8168kb.c',
-)
\ No newline at end of file
+)
+
+support_mempool_history = false
diff --git a/drivers/net/ring/meson.build b/drivers/net/ring/meson.build
index 9b713c9370..a37c7294f6 100644
--- a/drivers/net/ring/meson.build
+++ b/drivers/net/ring/meson.build
@@ -4,3 +4,4 @@
sources = files('rte_eth_ring.c')
headers = files('rte_eth_ring.h')
require_iova_in_mbuf = false
+support_mempool_history = false
diff --git a/drivers/net/sfc/meson.build b/drivers/net/sfc/meson.build
index 90c38231e1..924771c1ea 100644
--- a/drivers/net/sfc/meson.build
+++ b/drivers/net/sfc/meson.build
@@ -108,3 +108,5 @@ sources = files(
'sfc_repr.c',
'sfc_nic_dma.c',
)
+
+support_mempool_history = false
diff --git a/drivers/net/softnic/meson.build b/drivers/net/softnic/meson.build
index e8d7290062..7f018f6a60 100644
--- a/drivers/net/softnic/meson.build
+++ b/drivers/net/softnic/meson.build
@@ -17,3 +17,5 @@ sources = files(
)
deps += ['pipeline', 'port', 'table']
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/tap/meson.build b/drivers/net/tap/meson.build
index 7160e9e98d..11c25589eb 100644
--- a/drivers/net/tap/meson.build
+++ b/drivers/net/tap/meson.build
@@ -19,6 +19,7 @@ cflags += max_queues
cflags += no_wvla_cflag
require_iova_in_mbuf = false
+support_mempool_history = false
if cc.has_header_symbol('linux/pkt_cls.h', 'TCA_FLOWER_ACT')
cflags += '-DHAVE_TCA_FLOWER'
diff --git a/drivers/net/thunderx/meson.build b/drivers/net/thunderx/meson.build
index 03262af8ca..a93e6c0403 100644
--- a/drivers/net/thunderx/meson.build
+++ b/drivers/net/thunderx/meson.build
@@ -22,3 +22,5 @@ endif
if cc.has_argument('-Wno-maybe-uninitialized')
cflags += '-Wno-maybe-uninitialized'
endif
+
+support_mempool_history = false
diff --git a/drivers/net/txgbe/meson.build b/drivers/net/txgbe/meson.build
index 4dbbf597bb..baa6f549a9 100644
--- a/drivers/net/txgbe/meson.build
+++ b/drivers/net/txgbe/meson.build
@@ -32,3 +32,5 @@ elif arch_subdir == 'arm'
endif
install_headers('rte_pmd_txgbe.h')
+
+support_mempool_history = false
diff --git a/drivers/net/vdev_netvsc/meson.build b/drivers/net/vdev_netvsc/meson.build
index bd35a13f3d..cbbef57633 100644
--- a/drivers/net/vdev_netvsc/meson.build
+++ b/drivers/net/vdev_netvsc/meson.build
@@ -19,3 +19,5 @@ foreach option:cflags_options
cflags += option
endif
endforeach
+
+support_mempool_history = false
diff --git a/drivers/net/vhost/meson.build b/drivers/net/vhost/meson.build
index f481a3a4b8..9462f9db05 100644
--- a/drivers/net/vhost/meson.build
+++ b/drivers/net/vhost/meson.build
@@ -10,3 +10,5 @@ endif
deps += 'vhost'
sources = files('rte_eth_vhost.c')
headers = files('rte_eth_vhost.h')
+
+support_mempool_history = false
diff --git a/drivers/net/virtio/meson.build b/drivers/net/virtio/meson.build
index d3caa3a3b4..1aa77bb7fa 100644
--- a/drivers/net/virtio/meson.build
+++ b/drivers/net/virtio/meson.build
@@ -56,3 +56,5 @@ if is_linux
'virtio_user/virtio_user_dev.c')
deps += ['bus_vdev']
endif
+
+support_mempool_history = false
diff --git a/drivers/net/vmxnet3/meson.build b/drivers/net/vmxnet3/meson.build
index ed563386d5..63881d02ce 100644
--- a/drivers/net/vmxnet3/meson.build
+++ b/drivers/net/vmxnet3/meson.build
@@ -17,3 +17,5 @@ foreach flag: error_cflags
cflags += flag
endif
endforeach
+
+support_mempool_history = false
diff --git a/drivers/net/xsc/meson.build b/drivers/net/xsc/meson.build
index fe88bbee8c..e6410e92f1 100644
--- a/drivers/net/xsc/meson.build
+++ b/drivers/net/xsc/meson.build
@@ -17,3 +17,5 @@ sources = files(
)
cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/drivers/net/zxdh/meson.build b/drivers/net/zxdh/meson.build
index a48a0d43c2..28fa70f319 100644
--- a/drivers/net/zxdh/meson.build
+++ b/drivers/net/zxdh/meson.build
@@ -25,3 +25,7 @@ sources = files(
'zxdh_ethdev_ops.c',
'zxdh_mtr.c',
)
+
+cflags += no_wvla_cflag
+
+support_mempool_history = false
diff --git a/meson_options.txt b/meson_options.txt
index e49b2fc089..8be20c7456 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -16,6 +16,8 @@ option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', d
'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
option('enable_docs', type: 'boolean', value: false, description:
'build documentation')
+option('enable_mempool_history', type: 'boolean', value: false, description:
+ 'Enable mempool history tracking for debugging purposes. This will track mempool objects allocation and free operations. Default is false.')
option('enable_apps', type: 'string', value: '', description:
'Comma-separated list of apps to build. If unspecified, build all apps.')
option('enable_deprecated_libs', type: 'string', value: '', description:
--
2.34.1
* [RFC PATCH 3/5] net/mlx5: mark an operation in mempool object's history
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 1/5] mempool: record mempool objects operations history Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 2/5] drivers: add mempool history compilation flag Shani Peretz
@ 2025-06-16 7:29 ` Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 4/5] app/testpmd: add testpmd command to dump mempool history Shani Peretz
` (3 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev
Cc: Shani Peretz, Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao,
Ori Kam, Suanming Mou, Matan Azrad
Record operations on mempool objects when they are allocated
and released inside the mlx5 PMD.
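The marking pattern is the same at each touch point; a condensed
sketch of the Rx path (mirroring the hunks below):

    rep = rte_mbuf_raw_alloc(rxq->mp);
    rte_mempool_history_mark(rep, RTE_MEMPOOL_PMD_ALLOC);
    /* ... later, on error or when the PMD drops the packet ... */
    rte_mempool_history_mark(rep, RTE_MEMPOOL_PMD_FREE);
    rte_mbuf_raw_free(rep);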
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
drivers/net/mlx5/mlx5_rx.c | 9 +++++++++
drivers/net/mlx5/mlx5_rx.h | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 9 +++++++--
drivers/net/mlx5/mlx5_rxtx_vec.c | 6 ++++++
drivers/net/mlx5/mlx5_tx.h | 7 +++++++
drivers/net/mlx5/mlx5_txq.c | 1 +
6 files changed, 32 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 5f4a93fe8c..a86ed2180e 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -560,12 +560,15 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
elt_idx = (elts_ci + i) & e_mask;
elt = &(*rxq->elts)[elt_idx];
*elt = rte_mbuf_raw_alloc(rxq->mp);
+ rte_mempool_history_mark(*elt, RTE_MEMPOOL_PMD_ALLOC);
if (!*elt) {
for (i--; i >= 0; --i) {
elt_idx = (elts_ci +
i) & elts_n;
elt = &(*rxq->elts)
[elt_idx];
+ rte_mempool_history_mark(*elt,
+ RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg
(*elt);
}
@@ -952,6 +955,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rte_prefetch0(wqe);
/* Allocate the buf from the same pool. */
rep = rte_mbuf_raw_alloc(seg->pool);
+ rte_mempool_history_mark(rep, RTE_MEMPOOL_PMD_ALLOC);
if (unlikely(rep == NULL)) {
++rxq->stats.rx_nombuf;
if (!pkt) {
@@ -966,6 +970,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rep = NEXT(pkt);
NEXT(pkt) = NULL;
NB_SEGS(pkt) = 1;
+ rte_mempool_history_mark(pkt, RTE_MEMPOOL_PMD_FREE);
rte_mbuf_raw_free(pkt);
pkt = rep;
}
@@ -979,6 +984,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
len = mlx5_rx_poll_len(rxq, cqe, cqe_n, cqe_mask, &mcqe, &skip_cnt, false);
if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
/* We drop packets with non-critical errors */
+ rte_mempool_history_mark(rep, RTE_MEMPOOL_PMD_FREE);
rte_mbuf_raw_free(rep);
if (len == MLX5_CRITICAL_ERROR_CQE_RET) {
rq_ci = rxq->rq_ci << sges_n;
@@ -992,6 +998,7 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
continue;
}
if (len == 0) {
+ rte_mempool_history_mark(rep, RTE_MEMPOOL_PMD_FREE);
rte_mbuf_raw_free(rep);
break;
}
@@ -1268,6 +1275,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
++rxq->stats.rx_nombuf;
break;
}
+ rte_mempool_history_mark(pkt, RTE_MEMPOOL_PMD_ALLOC);
len = (byte_cnt & MLX5_MPRQ_LEN_MASK) >> MLX5_MPRQ_LEN_SHIFT;
MLX5_ASSERT((int)len >= (rxq->crc_present << 2));
if (rxq->crc_present)
@@ -1275,6 +1283,7 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq_code = mprq_buf_to_pkt(rxq, pkt, len, buf,
strd_idx, strd_cnt);
if (unlikely(rxq_code != MLX5_RXQ_CODE_EXIT)) {
+ rte_mempool_history_mark(pkt, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(pkt);
if (rxq_code == MLX5_RXQ_CODE_DROPPED) {
++rxq->stats.idropped;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 6380895502..db4ef10ca1 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -516,6 +516,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
if (unlikely(next == NULL))
return MLX5_RXQ_CODE_NOMBUF;
+ rte_mempool_history_mark(next, RTE_MEMPOOL_PMD_ALLOC);
NEXT(prev) = next;
SET_DATA_OFF(next, 0);
addr = RTE_PTR_ADD(addr, seg_len);
@@ -579,6 +580,7 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
if (unlikely(seg == NULL))
return MLX5_RXQ_CODE_NOMBUF;
+ rte_mempool_history_mark(seg, RTE_MEMPOOL_PMD_ALLOC);
SET_DATA_OFF(seg, 0);
rte_memcpy(rte_pktmbuf_mtod(seg, void *),
RTE_PTR_ADD(addr, len - hdrm_overlap),
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f5df451a32..e95bef9d55 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -164,6 +164,7 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
rte_errno = ENOMEM;
goto error;
}
+ rte_mempool_history_mark(buf, RTE_MEMPOOL_PMD_ALLOC);
/* Only vectored Rx routines rely on headroom size. */
MLX5_ASSERT(!has_vec_support ||
DATA_OFF(buf) >= RTE_PKTMBUF_HEADROOM);
@@ -221,8 +222,10 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
err = rte_errno; /* Save rte_errno before cleanup. */
elts_n = i;
for (i = 0; (i != elts_n); ++i) {
- if ((*rxq_ctrl->rxq.elts)[i] != NULL)
+ if ((*rxq_ctrl->rxq.elts)[i] != NULL) {
+ rte_mempool_history_mark((*rxq_ctrl->rxq.elts)[i], RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]);
+ }
(*rxq_ctrl->rxq.elts)[i] = NULL;
}
if (rxq_ctrl->share_group == 0)
@@ -324,8 +327,10 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
rxq->rq_pi = elts_ci;
}
for (i = 0; i != q_n; ++i) {
- if ((*rxq->elts)[i] != NULL)
+ if ((*rxq->elts)[i] != NULL) {
+ rte_mempool_history_mark((*rxq->elts)[i], RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg((*rxq->elts)[i]);
+ }
(*rxq->elts)[i] = NULL;
}
}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 1b701801c5..ffaa10c547 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -64,6 +64,7 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
#ifdef MLX5_PMD_SOFT_COUNTERS
err_bytes += PKT_LEN(pkt);
#endif
+ rte_mempool_history_mark(pkt, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(pkt);
} else {
pkts[n++] = pkt;
@@ -107,6 +108,7 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
rxq->stats.rx_nombuf += n;
return;
}
+ rte_mempool_history_bulk((void *)elts, n, RTE_MEMPOOL_PMD_ALLOC);
if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1)) {
for (i = 0; i < n; ++i) {
/*
@@ -171,6 +173,7 @@ mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
rxq->stats.rx_nombuf += n;
return;
}
+ rte_mempool_history_bulk((void *)elts, n, RTE_MEMPOOL_PMD_ALLOC);
rxq->elts_ci += n;
/* Prevent overflowing into consumed mbufs. */
elts_idx = rxq->elts_ci & wqe_mask;
@@ -224,6 +227,7 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
if (!elts[i]->pkt_len) {
rxq->consumed_strd = strd_n;
+ rte_mempool_history_mark(elts[i], RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(elts[i]);
#ifdef MLX5_PMD_SOFT_COUNTERS
rxq->stats.ipackets -= 1;
@@ -236,6 +240,7 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
buf, rxq->consumed_strd, strd_cnt);
rxq->consumed_strd += strd_cnt;
if (unlikely(rxq_code != MLX5_RXQ_CODE_EXIT)) {
+ rte_mempool_history_mark(elts[i], RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(elts[i]);
#ifdef MLX5_PMD_SOFT_COUNTERS
rxq->stats.ipackets -= 1;
@@ -586,6 +591,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rte_io_wmb();
*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
} while (tn != pkts_n);
+
return tn;
}
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 55568c41b1..7b61d87120 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -553,6 +553,7 @@ mlx5_tx_free_mbuf(struct mlx5_txq_data *__rte_restrict txq,
if (!MLX5_TXOFF_CONFIG(MULTI) && txq->fast_free) {
mbuf = *pkts;
pool = mbuf->pool;
+ rte_mempool_history_bulk((void *)pkts, pkts_n, RTE_MEMPOOL_PMD_FREE);
rte_mempool_put_bulk(pool, (void *)pkts, pkts_n);
return;
}
@@ -608,6 +609,7 @@ mlx5_tx_free_mbuf(struct mlx5_txq_data *__rte_restrict txq,
* Free the array of pre-freed mbufs
* belonging to the same memory pool.
*/
+ rte_mempool_history_bulk((void *)p_free, n_free, RTE_MEMPOOL_PMD_FREE);
rte_mempool_put_bulk(pool, (void *)p_free, n_free);
if (unlikely(mbuf != NULL)) {
/* There is the request to start new scan. */
@@ -1223,6 +1225,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
/* Exhausted packet, just free. */
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+ rte_mempool_history_mark(mbuf, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(mbuf);
loc->mbuf_off = 0;
MLX5_ASSERT(loc->mbuf_nseg > 1);
@@ -1265,6 +1268,7 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
/* Exhausted packet, just free. */
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+ rte_mempool_history_mark(mbuf, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(mbuf);
loc->mbuf_off = 0;
MLX5_ASSERT(loc->mbuf_nseg >= 1);
@@ -1715,6 +1719,7 @@ mlx5_tx_mseg_build(struct mlx5_txq_data *__rte_restrict txq,
/* Zero length segment found, just skip. */
mbuf = loc->mbuf;
loc->mbuf = loc->mbuf->next;
+ rte_mempool_history_mark(mbuf, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(mbuf);
if (--loc->mbuf_nseg == 0)
break;
@@ -2018,6 +2023,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
wqe->cseg.sq_ds -= RTE_BE32(1);
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+ rte_mempool_history_mark(mbuf, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(mbuf);
if (--nseg == 0)
break;
@@ -3317,6 +3323,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
* Packet data are completely inlined,
* free the packet immediately.
*/
+ rte_mempool_history_mark(loc->mbuf, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(loc->mbuf);
} else if ((!MLX5_TXOFF_CONFIG(EMPW) ||
MLX5_TXOFF_CONFIG(MPW)) &&
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 5fee5bc4e8..156f8c2ef8 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -78,6 +78,7 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl)
struct rte_mbuf *elt = (*elts)[elts_tail & elts_m];
MLX5_ASSERT(elt != NULL);
+ rte_mempool_history_mark(elt, RTE_MEMPOOL_PMD_FREE);
rte_pktmbuf_free_seg(elt);
#ifdef RTE_LIBRTE_MLX5_DEBUG
/* Poisoning. */
--
2.34.1
* [RFC PATCH 4/5] app/testpmd: add testpmd command to dump mempool history
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
` (2 preceding siblings ...)
2025-06-16 7:29 ` [RFC PATCH 3/5] net/mlx5: mark an operation in mempool object's history Shani Peretz
@ 2025-06-16 7:29 ` Shani Peretz
2025-06-16 7:29 ` [RFC PATCH 5/5] usertool: add a script to parse mempool history dump Shani Peretz
` (2 subsequent siblings)
6 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev; +Cc: Shani Peretz, Aman Singh
Dump the mempool object history to the console or to a file.
The dump will contain:
- Operation history for each mempool object
- Summary and statistics about all mempool objects
testpmd> dump_mempool_objects_history
testpmd> dump_mempool_objects_history <file_name>
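Illustrative output (the per-object hex words and the summary labels
come from rte_mempool_objects_dump in patch 1; the values here are
made up):

    testpmd> dump_mempool_objects_history
    0000000000000631
    0000000000000000
    ...
    Populated: 2048
    Never allocated: 2047
    ...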
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
app/test-pmd/cmdline.c | 59 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 58 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 7b4e27eddf..3233b9d663 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -296,6 +296,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"dump_log_types\n"
" Dumps the log level for all the dpdk modules\n\n"
+ "dump_mempool_objects_history\n"
+ " Dumps the mempool objects history\n\n"
+
"show port (port_id) speed_lanes capabilities"
" Show speed lanes capabilities of a port.\n\n"
);
@@ -9170,6 +9173,8 @@ static void cmd_dump_parsed(void *parsed_result,
#endif
else if (!strcmp(res->dump, "dump_log_types"))
rte_log_dump(stdout);
+ else if (!strcmp(res->dump, "dump_mempool_objects_history"))
+ rte_mempool_objects_dump(stdout);
}
static cmdline_parse_token_string_t cmd_dump_dump =
@@ -9191,7 +9196,8 @@ cmd_dump_init(void)
#ifndef RTE_EXEC_ENV_WINDOWS
"dump_trace#"
#endif
- "dump_log_types";
+ "dump_log_types#"
+ "dump_mempool_objects_history";
}
static cmdline_parse_inst_t cmd_dump = {
@@ -9253,6 +9259,56 @@ static cmdline_parse_inst_t cmd_dump_one = {
},
};
+/* Dump mempool objects history to file */
+struct cmd_dump_to_file_result {
+ cmdline_fixed_string_t dump;
+ cmdline_fixed_string_t file;
+};
+
+static void cmd_dump_to_file_parsed(void *parsed_result, struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_dump_to_file_result *res = parsed_result;
+ FILE *file = stdout;
+ char *file_name = res->file;
+
+ if (strcmp(res->dump, "dump_mempool_objects_history")) {
+ cmdline_printf(cl, "Invalid dump type\n");
+ return;
+ }
+
+ if (file_name && strlen(file_name)) {
+ file = fopen(file_name, "w");
+ if (!file) {
+ fprintf(stderr, "Failed to create file %s: %s\n",
+ file_name, strerror(errno));
+ return;
+ }
+ }
+ rte_mempool_objects_dump(file);
+ printf("Flow dump finished\n");
+ if (file_name && strlen(file_name))
+ fclose(file);
+}
+
+static cmdline_parse_token_string_t cmd_dump_to_file_dump =
+ TOKEN_STRING_INITIALIZER(struct cmd_dump_to_file_result, dump,
+ "dump_mempool_objects_history");
+
+static cmdline_parse_token_string_t cmd_dump_to_file_file =
+ TOKEN_STRING_INITIALIZER(struct cmd_dump_to_file_result, file, NULL);
+
+static cmdline_parse_inst_t cmd_dump_to_file = {
+ .f = cmd_dump_to_file_parsed, /* function to call */
+ .data = NULL, /* 2nd arg of func */
+ .help_str = "dump_mempool_objects_history <file_name>: Dump mempool objects history to file",
+ .tokens = { /* token list, NULL terminated */
+ (void *)&cmd_dump_to_file_dump,
+ (void *)&cmd_dump_to_file_file,
+ NULL,
+ },
+};
+
/* *** Filters Control *** */
#define IPV4_ADDR_TO_UINT(ip_addr, ip) \
@@ -13992,6 +14048,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
&cmd_cleanup_txq_mbufs,
&cmd_dump,
&cmd_dump_one,
+ &cmd_dump_to_file,
&cmd_flow,
&cmd_show_port_meter_cap,
&cmd_add_port_meter_profile_srtcm,
--
2.34.1
* [RFC PATCH 5/5] usertool: add a script to parse mempool history dump
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
` (3 preceding siblings ...)
2025-06-16 7:29 ` [RFC PATCH 4/5] app/testpmd: add testpmd command to dump mempool history Shani Peretz
@ 2025-06-16 7:29 ` Shani Peretz
2025-06-16 15:30 ` [RFC PATCH 0/5] Introduce mempool object new debug capabilities Stephen Hemminger
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
6 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-06-16 7:29 UTC (permalink / raw)
To: dev; +Cc: Shani Peretz, Robin Jarry
Added a Python script that parses the history dump of mempool objects
generated by rte_mempool_objects_dump and presents it in a human-readable
format.
If an operation ID is repeated, such as in the case of a double free,
it will be highlighted in red and listed at the end of the file.
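Usage sketch (the trace below is illustrative; the script also prints
the leading NEVER entries, trimmed here):

    $ ./usertools/dpdk-mempool_object_history_parser.py history.txt
    MEMPOOL OBJECT 1:
        ... -> ALLOC -> PMD_TX -> FREE

    === Metrics Summary ===
    Populated: 2048
    ...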
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
.../dpdk-mempool_object_history_parser.py | 129 ++++++++++++++++++
1 file changed, 129 insertions(+)
create mode 100755 usertools/dpdk-mempool_object_history_parser.py
diff --git a/usertools/dpdk-mempool_object_history_parser.py b/usertools/dpdk-mempool_object_history_parser.py
new file mode 100755
index 0000000000..0224a97e22
--- /dev/null
+++ b/usertools/dpdk-mempool_object_history_parser.py
@@ -0,0 +1,129 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2023 NVIDIA Corporation & Affiliates
+
+import sys
+import re
+import os
+
+RED = "\033[91m"
+RESET = "\033[0m"
+ENUM_PATTERN = r'enum\s+rte_mempool_history_op\s*{([^}]+)}'
+VALUE_PATTERN = r'([A-Z_]+)\s*=\s*(\d+),\s*(?:/\*\s*(.*?)\s*\*/)?'
+
+def match_field(match: re.Match) -> tuple[int, str]:
+ name, value, _ = match.groups()
+ return (int(value), name.replace('RTE_MEMPOOL_', ''))
+
+def parse_history_enum(header_file: str) -> dict[int, str]:
+ with open(header_file, 'r') as f:
+ content = f.read()
+
+ # Extract each enum value and its comment
+ enum_content = re.search(ENUM_PATTERN, content, re.DOTALL).group(1)
+ return dict(map(match_field, re.finditer(VALUE_PATTERN, enum_content)))
+
+
+# Generate HISTORY_OPS from the header file
+HEADER_FILE = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'lib/mempool/rte_mempool.h')
+try:
+ HISTORY_OPS = parse_history_enum(HEADER_FILE)
+except Exception as e:
+ print(f"Warning: Could not generate HISTORY_OPS from header file: {e}")
+
+
+def op_to_string(op: int) -> str:
+ return HISTORY_OPS.get(op, f"UNKNOWN({op})")
+
+def parse_mempool_object_history(line: str) -> list[str]:
+ line = line.strip().replace('0x', '')
+ return [op_to_string(int(digit, 16)) for digit in line]
+
+def parse_metrics(lines: list[str]) -> dict[str, int]:
+ metrics = {}
+ for line in lines:
+ if ':' not in line:
+ continue
+ key, value = line.split(':', 1)
+ metrics[key.strip()] = int(value.strip())
+ return metrics
+
+def print_history_sequence(ops: list[str]) -> bool:
+ sequence = []
+ had_repeat = False
+ for idx, op in enumerate(ops):
+ if idx > 0 and op == ops[idx-1] and op != 'NEVER':
+ sequence.append(RED + op + RESET)
+ had_repeat = True
+ else:
+ sequence.append(op)
+
+ if not sequence:
+ return had_repeat
+
+ max_op_width = max(len(re.sub(r'\x1b\[[0-9;]*m', '', op)) for op in sequence)
+ for i in range(0, len(sequence), 4):
+ chunk = sequence[i:i+4]
+ # pad to the visible width so ANSI color codes do not skew the alignment
+ formatted_ops = [op + " " * (max_op_width - len(re.sub(r'\x1b\[[0-9;]*m', '', op))) for op in chunk]
+ line = ""
+ for j, op in enumerate(formatted_ops):
+ line += op
+ if j < len(formatted_ops) - 1:
+ line += " -> "
+ if i + 4 < len(sequence):
+ line += " ->"
+ print("\t" + line)
+ return had_repeat
+
+def main():
+ if len(sys.argv) != 2:
+ print("Usage: {} <history_file>".format(sys.argv[0]))
+ sys.exit(1)
+
+ try:
+ with open(sys.argv[1], 'r') as f:
+ lines = f.readlines()
+
+ # Find where metrics start
+ metrics_start = -1
+ for i, line in enumerate(lines):
+ if "Populated:" in line:
+ metrics_start = i
+ break
+
+ # Process mempool object history traces
+ marked_mempool_objects = []
+ mempool_object_id = 1
+ for line in lines[:metrics_start] if metrics_start != -1 else lines:
+ if not line.strip():
+ continue
+ ops = parse_mempool_object_history(line)
+ print(f"MEMPOOL OBJECT {mempool_object_id}:")
+ had_repeat = print_history_sequence(ops)
+ print() # Empty line between mempool objects
+ if had_repeat:
+ marked_mempool_objects.append(mempool_object_id)
+ mempool_object_id += 1
+
+ if marked_mempool_objects:
+ print("MEMPOOL OBJECTS with repeated ops:", marked_mempool_objects)
+
+ if metrics_start != -1:
+ print("=== Metrics Summary ===")
+ metrics = parse_metrics(lines[metrics_start:])
+ # Find max width of metric names for alignment
+ max_name_width = max(len(name) for name in metrics.keys())
+ # Print metrics in aligned format
+ for name, value in metrics.items():
+ print(f"{name + ':':<{max_name_width + 2}} {value}")
+
+ except FileNotFoundError:
+ print(f"Error: File {sys.argv[1]} not found")
+ sys.exit(1)
+ except Exception as e:
+ print(f"Error processing file: {e}")
+ sys.exit(1)
+
+if __name__ == "__main__":
+ main()
--
2.34.1
* Re: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
` (4 preceding siblings ...)
2025-06-16 7:29 ` [RFC PATCH 5/5] usertool: add a script to parse mempool history dump Shani Peretz
@ 2025-06-16 15:30 ` Stephen Hemminger
2025-06-19 12:57 ` Morten Brørup
2025-07-07 5:45 ` Shani Peretz
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
6 siblings, 2 replies; 24+ messages in thread
From: Stephen Hemminger @ 2025-06-16 15:30 UTC (permalink / raw)
To: Shani Peretz; +Cc: dev
On Mon, 16 Jun 2025 10:29:05 +0300
Shani Peretz <shperetz@nvidia.com> wrote:
> This feature is designed to monitor the lifecycle of mempool objects
> as they move between the application and the PMD.
>
> It will allow us to track the operations and transitions of each mempool
> object throughout the system, helping in debugging and understanding object flow.
>
> The implementation includes several key components:
> 1. Added a bitmap to the mempool's header (rte_mempool_objhdr)
> that represents the operations history.
> 2. Added functions that allow marking operations on
> mempool objects.
> 3. Dumps the history to a file or the console
> (rte_mempool_objects_dump).
> 4. Added a Python script that can parse, analyze the data and
> present it in a human-readable format.
> 5. Added a compilation flag to enable the feature.
>
> Shani Peretz (5):
> mempool: record mempool objects operations history
> drivers: add mempool history compilation flag
> net/mlx5: mark an operation in mempool object's history
> app/testpmd: add testpmd command to dump mempool history
> usertool: add a script to parse mempool history dump
>
> [...]
>
Could this not already be done with tracing infrastructure?
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-06-16 15:30 ` [RFC PATCH 0/5] Introduce mempool object new debug capabilities Stephen Hemminger
@ 2025-06-19 12:57 ` Morten Brørup
2025-07-07 5:46 ` Shani Peretz
2025-07-07 5:45 ` Shani Peretz
1 sibling, 1 reply; 24+ messages in thread
From: Morten Brørup @ 2025-06-19 12:57 UTC (permalink / raw)
To: Stephen Hemminger, Shani Peretz; +Cc: dev
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Monday, 16 June 2025 17.30
>
> On Mon, 16 Jun 2025 10:29:05 +0300
> Shani Peretz <shperetz@nvidia.com> wrote:
>
> > This feature is designed to monitor the lifecycle of mempool objects
> > as they move between the application and the PMD.
> >
> > It will allow us to track the operations and transitions of each mempool
> > object throughout the system, helping in debugging and understanding object
> > flow.
> >
> > The implementation includes several key components:
> > 1. Added a bitmap to the mempool's header (rte_mempool_objhdr)
> > that represents the operations history.
> > 2. Added functions that allow marking operations on
> > mempool objects.
> > 3. Dumps the history to a file or the console
> > (rte_mempool_objects_dump).
> > 4. Added a Python script that can parse, analyze the data and
> > present it in a human-readable format.
> > 5. Added a compilation flag to enable the feature.
> >
> > Shani Peretz (5):
> > mempool: record mempool objects operations history
> > drivers: add mempool history compilation flag
> > net/mlx5: mark an operation in mempool object's history
> > app/testpmd: add testpmd command to dump mempool history
> > usertool: add a script to parse mempool history dump
> >
>
> Could this not already be done with tracing infrastructure?
I agree with Stephen on this.
And, if you plan to use this for performance measurements, you can use the coming PMU trace to trace the objects' movements between CPU caches and RAM, so you can discriminate between hot and cold objects.
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-06-16 15:30 ` [RFC PATCH 0/5] Introduce mempool object new debug capabilities Stephen Hemminger
2025-06-19 12:57 ` Morten Brørup
@ 2025-07-07 5:45 ` Shani Peretz
2025-07-07 12:10 ` Morten Brørup
1 sibling, 1 reply; 24+ messages in thread
From: Shani Peretz @ 2025-07-07 5:45 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Monday, 16 June 2025 18:30
> To: Shani Peretz <shperetz@nvidia.com>
> Cc: dev@dpdk.org
> Subject: Re: [RFC PATCH 0/5] Introduce mempool object new debug
> capabilities
>
> External email: Use caution opening links or attachments
>
>
> On Mon, 16 Jun 2025 10:29:05 +0300
> Shani Peretz <shperetz@nvidia.com> wrote:
>
> > This feature is designed to monitor the lifecycle of mempool objects
> > as they move between the application and the PMD.
> >
> > It will allow us to track the operations and transitions of each
> > mempool object throughout the system, helping in debugging and
> > understanding object flow.
> >
> > The implementation includes several key components:
> > 1. Added a bitmap to the mempool's header (rte_mempool_objhdr)
> > that represents the operations history.
> > 2. Added functions that allow marking operations on
> > mempool objects.
> > 3. Dumps the history to a file or the console
> > (rte_mempool_objects_dump).
> > 4. Added a Python script that can parse, analyze the data and
> > present it in a human-readable format.
> > 5. Added a compilation flag to enable the feature.
> >
> > Shani Peretz (5):
> > mempool: record mempool objects operations history
> > drivers: add mempool history compilation flag
> > net/mlx5: mark an operation in mempool object's history
> > app/testpmd: add testpmd command to dump mempool history
> > usertool: add a script to parse mempool history dump
> >
> > [...]
> >
>
> Could this not already be done with tracing infrastructure?
Hey,
We did consider tracing but:
- It has limited capacity, which will result in older mbufs being lost in the tracing output while they are still in use
- Some operations may be lost, and we might not capture the complete picture due to trace misses caused by the performance overhead of tracking on the datapath as far as I understand
WDYT?
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-06-19 12:57 ` Morten Brørup
@ 2025-07-07 5:46 ` Shani Peretz
0 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-07-07 5:46 UTC (permalink / raw)
To: Morten Brørup, Stephen Hemminger; +Cc: dev
> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Thursday, 19 June 2025 15:57
> To: Stephen Hemminger <stephen@networkplumber.org>; Shani Peretz
> <shperetz@nvidia.com>
> Cc: dev@dpdk.org
> Subject: RE: [RFC PATCH 0/5] Introduce mempool object new debug
> capabilities
>
> External email: Use caution opening links or attachments
>
>
> > From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> > Sent: Monday, 16 June 2025 17.30
> >
> > On Mon, 16 Jun 2025 10:29:05 +0300
> > Shani Peretz <shperetz@nvidia.com> wrote:
> >
> > > [...]
> > Could this not already be done with tracing infrastructure?
>
> I agree with Stephen on this.
>
> And, if you plan to use this for performance measurements, you can use the
> coming PMU trace to trace the objects' movements between CPU caches and
> RAM, so you can discriminate between hot and cold objects.
We want to track the transitions of objects between the app and the PMD, so I am not sure the PMU library is helpful in this case, is it?
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-07-07 5:45 ` Shani Peretz
@ 2025-07-07 12:10 ` Morten Brørup
2025-07-19 14:39 ` Morten Brørup
0 siblings, 1 reply; 24+ messages in thread
From: Morten Brørup @ 2025-07-07 12:10 UTC (permalink / raw)
To: Shani Peretz, Stephen Hemminger; +Cc: dev
> From: Shani Peretz [mailto:shperetz@nvidia.com]
> Sent: Monday, 7 July 2025 07.45
>
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Monday, 16 June 2025 18:30
> >
> > On Mon, 16 Jun 2025 10:29:05 +0300
> > Shani Peretz <shperetz@nvidia.com> wrote:
> >
> > > [...]
> >
> > Could this not already be done with tracing infrastructure?
>
> Hey,
> We did consider tracing but:
> - It has limited capacity, which will result in older mbufs being
> lost in the tracing output while they are still in use
> - Some operations may be lost, and we might not capture the
> complete picture due to trace misses caused by the performance overhead
> of tracking on the datapath as far as I understand
> WDYT?
This looks like an alternative trace infrastructure, just for mempool objects.
But the list of operations is limited to basic operations on mbuf mempool objects.
It lacks support for other operations on mbufs, e.g. IP fragmentation/defragmentation library operations, application specific operations, and transitions between the mempool cache and the mempool backing store.
It also lacks support for operations on other mempool objects than mbufs.
You might be better off using the trace infrastructure, or something similar.
Using the trace infrastructure allows you to record more detailed information along with the transitions of "owners" of each mbuf.
I'm not opposing this RFC, but I think it is very limited, and not sufficiently expandable.
I get the point that trace can cause old events on active mbufs to be lost, and the concept of a trace buffer per mempool object is a good solution to that.
But I think you need to be able to store much more information with each transition; at least a timestamp. And if you do that, you need much more than 4 bits per event.
Alternatively, if you do proceed with the RFC in the current form, I have two key suggestions:
1. Make it possible to register operations at runtime. (Look at dynamic mbuf fields for inspiration.)
2. Use 8 bits for the operation, instead of 4.
And if you need a longer trace history, you can use the rte_bitset library instead of a single uint64_t.
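To sketch the trade-off (an illustration only; the helper names are mine, not from the RFC):

/* 4 bits per event: 16 events of history fit in one uint64_t. */
static inline void mark4(uint64_t *hist, uint8_t op)
{
	*hist = (*hist << 4) | (op & 0xF);
}

/* 8 bits per event: room for runtime-registered event IDs,
 * but only 8 events of history in the same uint64_t.
 */
static inline void mark8(uint64_t *hist, uint8_t op)
{
	*hist = (*hist << 8) | op;
}

Widening the event field halves the history depth, which is why a longer history would need the rte_bitset approach or a wider field.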
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-07-07 12:10 ` Morten Brørup
@ 2025-07-19 14:39 ` Morten Brørup
2025-08-25 11:27 ` Slava Ovsiienko
0 siblings, 1 reply; 24+ messages in thread
From: Morten Brørup @ 2025-07-19 14:39 UTC (permalink / raw)
To: Shani Peretz; +Cc: dev, Stephen Hemminger
> From: Morten Brørup [mailto:mb@smartsharesystems.com]
> Sent: Monday, 7 July 2025 14.11
>
> > From: Shani Peretz [mailto:shperetz@nvidia.com]
> > Sent: Monday, 7 July 2025 07.45
> >
> > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > Sent: Monday, 16 June 2025 18:30
> > >
> > > On Mon, 16 Jun 2025 10:29:05 +0300
> > > Shani Peretz <shperetz@nvidia.com> wrote:
> > >
> > > > [...]
> > >
> > > Could this not already be done with tracing infrastructure?
> >
> > Hey,
> > We did consider tracing but:
> > - It has limited capacity, which will result in older mbufs being
> > lost in the tracing output while they are still in use
> > - Some operations may be lost, and we might not capture the
> > complete picture due to trace misses caused by the performance
> > overhead
> > of tracking on the datapath as far as I understand
> > WDYT?
>
> This looks like an alternative trace infrastructure, just for mempool
> objects.
> But the list of operations is limited to basic operations on mbuf
> mempool objects.
> It lacks support for other operations on mbufs, e.g. IP
> fragmentation/defragmentation library operations, application specific
> operations, and transitions between the mempool cache and the mempool
> backing store.
> It also lacks support for operations on other mempool objects than
> mbufs.
>
> You might be better off using the trace infrastructure, or something
> similar.
> Using the trace infrastructure allows you to record more detailed
> information along with the transitions of "owners" of each mbuf.
>
> I'm not opposing this RFC, but I think it is very limited, and not
> sufficiently expandable.
>
> I get the point that trace can cause old events on active mbufs to be
> lost, and the concept of a trace buffer per mempool object is a good
> solution to that.
> But I think you need to be able to store much more information with each
> transition; at least a timestamp. And if you do that, you need much more
> than 4 bits per event.
>
> Alternatively, if you do proceed with the RFC in the current form, I
> have two key suggestions:
> 1. Make it possible to register operations at runtime. (Look at dynamic
> mbuf fields for inspiration.)
> 2. Use 8 bits for the operation, instead of 4.
> And if you need a longer trace history, you can use the rte_bitset
> library instead of a single uint64_t.
One more comment:
If this feature is meant for mbuf type mempool objects only, it might be possible to add it to the mbuf library (and store the trace in the rte_mbuf structure) instead of the mempool library.
Although that would prevent tracing internal mempool library operations, specifically moving the object between the mempool cache and mempool backing store.
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-07-19 14:39 ` Morten Brørup
@ 2025-08-25 11:27 ` Slava Ovsiienko
2025-09-01 15:34 ` Morten Brørup
0 siblings, 1 reply; 24+ messages in thread
From: Slava Ovsiienko @ 2025-08-25 11:27 UTC (permalink / raw)
To: Morten Brørup, Shani Peretz; +Cc: dev, Stephen Hemminger
Hi, Morten
> -----Original Message-----
> From: Morten Brørup <mb@smartsharesystems.com>
> Sent: Saturday, July 19, 2025 5:39 PM
> To: Shani Peretz <shperetz@nvidia.com>
> Cc: dev@dpdk.org; Stephen Hemminger <stephen@networkplumber.org>
> Subject: RE: [RFC PATCH 0/5] Introduce mempool object new debug
> capabilities
>
> > From: Morten Brørup [mailto:mb@smartsharesystems.com]
> > Sent: Monday, 7 July 2025 14.11
> >
> > > From: Shani Peretz [mailto:shperetz@nvidia.com]
> > > Sent: Monday, 7 July 2025 07.45
> > >
> > > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > > Sent: Monday, 16 June 2025 18:30
> > > >
> > > > On Mon, 16 Jun 2025 10:29:05 +0300 Shani Peretz
> > > > <shperetz@nvidia.com> wrote:
> > > >
> > > > > [...]
> > > >
> > > > Could this not already be done with tracing infrastructure?
> > >
> > > Hey,
> > > We did consider tracing but:
> > > - It has limited capacity, which will result in older mbufs being
> > > lost in the tracing output while they are still in use
> > > - Some operations may be lost, and we might not capture the
> > > complete picture due to trace misses caused by the performance
> > > overhead
> > > of tracking on the datapath as far as I understand WDYT?
> >
> > This looks like an alternative trace infrastructure, just for mempool
> > objects.
It is not so much an alternative as an orthogonal way to trace.
We gather the life history per mbuf, and it helps a lot with nailing down issues
at users' sites. From our practice, it instantly helped to find an mbuf double free,
and core policy violations for rx/tx_burst routines in user applications.
> > But the list of operations is limited to basic operations on mbuf
> > mempool objects.
Agree, but this limitation exists because only mbuf life milestones
are well-defined in DPDK (spelled out in the sketch below):
- alloc by app/tx_burst/tx queued|tx_busy/free by PMD|app
- alloc by PMD/rx replenish/rx_burst_returned/free by app
No other mempool-based object milestones are defined so well.
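Spelled out as a sketch (my notation, with event names abbreviated):

/* Tx path: the app allocates and passes the mbuf to tx_burst; it is
 * either queued (and later freed by the PMD) or returned as busy:
 *   ALLOC -> PMD_TX -> PMD_FREE          (queued and sent)
 *   ALLOC -> PMD_TX -> BUSY_TX -> FREE   (queue full, app frees)
 * Rx path: the PMD allocates on replenish, rx_burst returns to the app:
 *   PMD_ALLOC -> APP_RX -> FREE
 */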
So, do you think we should move this history data to the mbuf object
itself?
> > It lacks support for other operations on mbufs, e.g. IP
> > fragmentation/defragmentation library operations, application specific
Yes, these can be added later, once we see user requests.
There are too many options to cover, and the field bit width is too limited to embrace all of them 😊
> > operations, and transitions between the mempool cache and the mempool
> > backing store.
> > It also lacks support for operations on other mempool objects than
> > mbufs.
> >
> > You might better off using the trace infrastructure, or something
> > similar.
We considered using the existing trace feature.
We even tried in practice to achieve our debugging goals with trace
on the user's sites:
Trace:
- is overwhelming - too many events with too many mbufs
- is incomplete - sometimes long runs were needed to catch the bug conditions,
and it was rather hard to record the whole huge trace to files in real time,
so we usually had gaps
- is performance impacting. It records the full event information, which is redundant,
queries the timestamp (an action that might introduce execution barriers), and uses more memory
(a large linear address range, which impacts cache performance). Sometimes it is crucial
not to impact the datapath performance in order to reproduce an issue onsite.
> > Using the trace infrastructure allows you to record more detailed
> > information along with the transitions of "owners" of each mbuf.
> >
> > I'm not opposing this RFC, but I think it is very limited, and not
> > sufficiently expandable.
It was intentionally developed to be limited, very compact and very fast.
> >
> > I get the point that trace can cause old events on active mbufs to be
> > lost, and the concept of a trace buffer per mempool object is a good
> > solution to that.
> > But I think you need to be able to store much more information with
> > each transition; at least a timestamp. And if you do that, you need
> > much more than 4 bits per event.
> >
> > Alternatively, if you do proceed with the RFC in the current form, I
> > have two key suggestions:
> > 1. Make it possible to register operations at runtime. (Look at
> > dynamic mbuf fields for inspiration.) 2. Use 8 bits for the operation,
> > instead of 4.
OK, we can do:
- narrow the history-gathering feature to mbuf only
- use the dynamic mbuf field for that
- registration would involve extra memory accesses and affect performance;
I would prefer a fixed set of events to record, at least on the feature's
first introduction.
> > And if you need a longer trace history, you can use the rte_bitset
> > library instead of a single uint64_t.
It would not be the best option for performance ☹
>
> One more comment:
> If this feature is meant for mbuf type mempool objects only, it might be
> possible to add it to the mbuf library (and store the trace in the rte_mbuf
> structure) instead of the mempool library.
> Although that would prevent tracing internal mempool library operations,
> specifically moving the object between the mempool cache and mempool
> backing store.
We can update free/alloc mbuf functions, so functionality will not be lost.
With best regards,
Slava
* RE: [RFC PATCH 0/5] Introduce mempool object new debug capabilities
2025-08-25 11:27 ` Slava Ovsiienko
@ 2025-09-01 15:34 ` Morten Brørup
0 siblings, 0 replies; 24+ messages in thread
From: Morten Brørup @ 2025-09-01 15:34 UTC (permalink / raw)
To: Slava Ovsiienko, Shani Peretz; +Cc: dev, Stephen Hemminger, Andrew Rybchenko
> From: Slava Ovsiienko [mailto:viacheslavo@nvidia.com]
> Sent: Monday, 25 August 2025 13.28
>
> Hi, Morten
>
> > From: Morten Brørup <mb@smartsharesystems.com>
> > Sent: Saturday, July 19, 2025 5:39 PM
> >
> > > From: Morten Brørup [mailto:mb@smartsharesystems.com]
> > > Sent: Monday, 7 July 2025 14.11
> > >
> > > > From: Shani Peretz [mailto:shperetz@nvidia.com]
> > > > Sent: Monday, 7 July 2025 07.45
> > > >
> > > > > From: Stephen Hemminger <stephen@networkplumber.org>
> > > > > Sent: Monday, 16 June 2025 18:30
> > > > >
> > > > > On Mon, 16 Jun 2025 10:29:05 +0300 Shani Peretz
> > > > > <shperetz@nvidia.com> wrote:
> > > > >
> > > > > > [...]
> > > > >
> > > > > Could this not already be done with tracing infrastructure?
> > > >
> > > > Hey,
> > > > We did consider tracing but:
> > > > - It has limited capacity, which will result in older mbufs being
> > > > lost in the tracing output while they are still in use
> > > > - Some operations may be lost, and we might not capture the
> > > > complete picture due to trace misses caused by the performance
> > > > overhead
> > > > of tracking on the datapath as far as I understand WDYT?
> > >
> > > This looks like an alternative trace infrastructure, just for mempool
> > > objects.
>
> It is not so much an alternative as an orthogonal way to trace.
> We gather the life history per mbuf, and it helps a lot with nailing down issues
> at users' sites. From our practice, it instantly helped to find an mbuf double free,
> and core policy violations for rx/tx_burst routines in user applications.
It's great to have some real-life use cases proving the value of this patch series!
<sidetracking>
The mbuf library has a performance optimization where it keeps ref_cnt==1 for free mbufs in the pool, so double free of mbufs cannot be detected (without this series). Some other patch series could make that performance optimization optional, so double free can be detected by the mbuf library at runtime. Although that would be a major API change, considering how many drivers bypass the mbuf library and call the mempool library directly for allocating/freeing mbufs.
</sidetracking>
What are "core policy violations for rx/tx_burst routines"?
(I'm trying to understand what other types of bugs this series has helped you track down.)
>
> > > But the list of operations is limited to basic operations on mbuf
> > > mempool objects.
>
> Agree, but this limitation exists because only mbuf life milestones
> are well-defined in DPDK:
> - alloc by app/tx_burst/tx queued|tx_busy/free by PMD|app
> - alloc by PMD/rx replenish/rx_burst_returned/free by app
>
> No other mempool-based object milestones are defined so well.
> So, do you think we should move this history data to the mbuf object
> itself?
This series specifically targets mbufs, and it doesn't look useful for generic mempool objects, so yes, please move the history data to the mbuf object itself.
>
> > > It lacks support for other operations on mbufs, e.g. IP
> > > fragmentation/defragmentation library operations, application specific
>
> Yes, these can be added later, once we see user requests.
> There are too many options to cover, and the field bit width is too limited
> to embrace all of them 😊
Agree.
>
> > > operations, and transitions between the mempool cache and the mempool
> > > backing store.
> > > It also lacks support for operations on other mempool objects than
> > > mbufs.
> > >
> > > You might be better off using the trace infrastructure, or something
> > > similar.
> We considered using the existing trace feature.
> We even tried in practice to achieve our debugging goals with trace
> on the user's sites:
> Trace:
> - is overwhelming - too many events with too many mbufs
> - is incomplete - sometimes long runs were needed to catch the bug conditions,
> and it was rather hard to record the whole huge trace to files in real time,
> so we usually had gaps
> - is performance impacting. It records the full event information, which is
> redundant, queries the timestamp (an action that might introduce execution
> barriers), and uses more memory (a large linear address range, which impacts
> cache performance). Sometimes it is crucial not to impact the datapath
> performance in order to reproduce an issue onsite.
OK. Personally, I'm not a big fan of trace in the fast path either.
>
> > > Using the trace infrastructure allows you to record more detailed
> > > information along with the transitions of "owners" of each mbuf.
> > >
> > > I'm not opposing this RFC, but I think it is very limited, and not
> > > sufficiently expandable.
> It was intentionally developed to be limited, very compact and very fast.
OK. I agree to focusing only on the lifecycle of mbufs with this series.
>
> > >
> > > I get the point that trace can cause old events on active mbufs to be
> > > lost, and the concept of a trace buffer per mempool object is a good
> > > solution to that.
> > > But I think you need to be able to store much more information with
> > > each transition; at least a timestamp. And if you do that, you need
> > > much more than 4 bits per event.
> > >
> > > Alternatively, if you do proceed with the RFC in the current form, I
> > > have two key suggestions:
> > > 1. Make it possible to register operations at runtime. (Look at
> > > dynamic mbuf fields for inspiration.) 2. Use 8 bits for the operation,
> > > instead of 4.
> OK, we can do:
> - narrow the history-gathering feature to mbuf only
Agree.
> - use the dynamic mbuf field for that
Agree.
> - registration would involve extra memory accesses and affect performance;
> I would prefer a fixed set of events to record, at least on the feature's
> first introduction.
OK, let's stick with a fixed set of event types for now.
Patch 1/5 has only 8 event types, so using 4 bits for each event history entry probably suffices.
That still leaves room for 8 more event types.
Suggest adding event types USR1 = 15, USR2 = 14, USR3 = 13, USR4 = 12 for application use.
>
> > > And if you need a longer trace history, you can use the rte_bitset
> > > library instead of a single uint64_t.
> It would not be the best option for performance ☹
Agree.
I suppose a trace history of 16 or 8 entries suffices for debugging.
> >
> > One more comment:
> > If this feature is meant for mbuf type mempool objects only, it might be
> > possible to add it to the mbuf library (and store the trace in the rte_mbuf
> > structure) instead of the mempool library.
>
> > Although that would prevent tracing internal mempool library operations,
> > specifically moving the object between the mempool cache and mempool
> > backing store.
>
> We can update free/alloc mbuf functions, so functionality will not be lost.
>
> With best regards,
> Slava
>
Please go ahead with this, Slava.
I'm looking forward to seeing the next version of this series. :-)
-Morten
* [PATCH v2 0/4] add mbuf debug capabilities
2025-06-16 7:29 [RFC PATCH 0/5] Introduce mempool object new debug capabilities Shani Peretz
` (5 preceding siblings ...)
2025-06-16 15:30 ` [RFC PATCH 0/5] Introduce mempool object new debug capabilities Stephen Hemminger
@ 2025-09-16 15:12 ` Shani Peretz
2025-09-16 15:12 ` [PATCH v2 1/4] mbuf: record mbuf operations history Shani Peretz
` (3 more replies)
6 siblings, 4 replies; 24+ messages in thread
From: Shani Peretz @ 2025-09-16 15:12 UTC (permalink / raw)
To: dev
Cc: mb, stephen, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Shani Peretz
v2:
- narrow scope to mbuf only - move the history tracking data to the rte_mbuf structure.
- use a dynamic mbuf field to store the tracking bitmap.
- remove the per-driver compilation flags.
v1:
This feature is designed to monitor the lifecycle of mempool objects
as they move between the application and the PMD.
It will allow us to track the operations and transitions of each mempool
object throughout the system, helping in debugging and understanding object flow.
The implementation includes several key components:
1. Added a bitmap to the mempool's header (rte_mempool_objhdr)
that represents the operations history.
2. Added functions that allow marking operations on
mempool objects.
3. Dumps the history to a file or the console
(rte_mempool_objects_dump).
4. Added a Python script that can parse, analyze the data and
present it in a human-readable format.
5. Added a compilation flag to enable the feature.
Shani Peretz (4):
mbuf: record mbuf operations history
net/mlx5: mark an operation in mbuf's history
app/testpmd: add testpmd command to dump mbuf history
usertool: add a script to parse mbuf history dump
app/test-pmd/cmdline.c | 60 ++++++++-
config/meson.build | 1 +
drivers/net/mlx5/mlx5_rx.c | 25 ++++
drivers/net/mlx5/mlx5_rx.h | 6 +
drivers/net/mlx5/mlx5_rxq.c | 15 ++-
drivers/net/mlx5/mlx5_rxtx_vec.c | 16 +++
drivers/net/mlx5/mlx5_tx.h | 21 +++
drivers/net/mlx5/mlx5_txq.c | 3 +
lib/ethdev/rte_ethdev.h | 15 +++
lib/mbuf/meson.build | 2 +
lib/mbuf/rte_mbuf.c | 10 +-
lib/mbuf/rte_mbuf.h | 23 +++-
lib/mbuf/rte_mbuf_dyn.h | 7 +
lib/mbuf/rte_mbuf_history.c | 181 ++++++++++++++++++++++++++
lib/mbuf/rte_mbuf_history.h | 154 ++++++++++++++++++++++
meson_options.txt | 2 +
usertools/dpdk-mbuf_history_parser.py | 173 ++++++++++++++++++++++++
17 files changed, 708 insertions(+), 6 deletions(-)
create mode 100644 lib/mbuf/rte_mbuf_history.c
create mode 100644 lib/mbuf/rte_mbuf_history.h
create mode 100755 usertools/dpdk-mbuf_history_parser.py
--
2.34.1
* [PATCH v2 1/4] mbuf: record mbuf operations history
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
@ 2025-09-16 15:12 ` Shani Peretz
2025-09-16 21:17 ` Stephen Hemminger
2025-09-16 15:12 ` [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history Shani Peretz
` (2 subsequent siblings)
3 siblings, 1 reply; 24+ messages in thread
From: Shani Peretz @ 2025-09-16 15:12 UTC (permalink / raw)
To: dev
Cc: mb, stephen, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Shani Peretz, Andrew Rybchenko
This feature is designed to monitor the lifecycle of mbufs
as they move between the application and the PMD.
It will allow us to track the operations and transitions
of each mbuf throughout the system, helping in debugging
and understanding object flow.
The implementation uses a dynamic field to store a 64-bit
history value in each mbuf. Each operation is represented
by a 4-bit value, allowing up to 16 operations to be tracked
per mbuf. The dynamic field is automatically initialized
when the first mbuf pool is created.
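For illustration (not part of the patch), three events recorded in order
pack into the 64-bit field like this, using the enum values introduced
below, with the most recent event in the low bits:

	uint64_t h = 0;
	h = (h << 4) | RTE_MBUF_ALLOC;    /* h = 0x6,   allocated by app   */
	h = (h << 4) | RTE_MBUF_PMD_TX;   /* h = 0x63,  handed to tx_burst */
	h = (h << 4) | RTE_MBUF_PMD_FREE; /* h = 0x632, freed by the PMD   */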
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
config/meson.build | 1 +
lib/ethdev/rte_ethdev.h | 15 +++
lib/mbuf/meson.build | 2 +
lib/mbuf/rte_mbuf.c | 10 +-
lib/mbuf/rte_mbuf.h | 23 ++++-
lib/mbuf/rte_mbuf_dyn.h | 7 ++
lib/mbuf/rte_mbuf_history.c | 181 ++++++++++++++++++++++++++++++++++++
lib/mbuf/rte_mbuf_history.h | 154 ++++++++++++++++++++++++++++++
meson_options.txt | 2 +
9 files changed, 392 insertions(+), 3 deletions(-)
create mode 100644 lib/mbuf/rte_mbuf_history.c
create mode 100644 lib/mbuf/rte_mbuf_history.h
diff --git a/config/meson.build b/config/meson.build
index 55497f0bf5..d1f21f3115 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -379,6 +379,7 @@ if get_option('mbuf_refcnt_atomic')
dpdk_conf.set('RTE_MBUF_REFCNT_ATOMIC', true)
endif
dpdk_conf.set10('RTE_IOVA_IN_MBUF', get_option('enable_iova_as_pa'))
+dpdk_conf.set10('RTE_MBUF_HISTORY_DEBUG', get_option('enable_mbuf_history'))
compile_time_cpuflags = []
subdir(arch_subdir)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d23c143eed..d0f6cd2582 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -6336,6 +6336,10 @@ rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(rx_pkts, nb_rx, RTE_MBUF_APP_RX);
+#endif
+
#ifdef RTE_ETHDEV_RXTX_CALLBACKS
{
void *cb;
@@ -6688,8 +6692,19 @@ rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
}
#endif
+#if RTE_MBUF_HISTORY_DEBUG
+ uint16_t requested_pkts = nb_pkts;
+ rte_mbuf_history_bulk(tx_pkts, nb_pkts, RTE_MBUF_PMD_TX);
+#endif
+
nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);
+#if RTE_MBUF_HISTORY_DEBUG
+ if (requested_pkts > nb_pkts)
+ rte_mbuf_history_bulk(tx_pkts + nb_pkts,
+ requested_pkts - nb_pkts, RTE_MBUF_BUSY_TX);
+#endif
+
rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
return nb_pkts;
}
diff --git a/lib/mbuf/meson.build b/lib/mbuf/meson.build
index 0435c5e628..2c840ee2f2 100644
--- a/lib/mbuf/meson.build
+++ b/lib/mbuf/meson.build
@@ -6,6 +6,7 @@ sources = files(
'rte_mbuf_ptype.c',
'rte_mbuf_pool_ops.c',
'rte_mbuf_dyn.c',
+ 'rte_mbuf_history.c',
)
headers = files(
'rte_mbuf.h',
@@ -13,5 +14,6 @@ headers = files(
'rte_mbuf_ptype.h',
'rte_mbuf_pool_ops.h',
'rte_mbuf_dyn.h',
+ 'rte_mbuf_history.h',
)
deps += ['mempool']
diff --git a/lib/mbuf/rte_mbuf.c b/lib/mbuf/rte_mbuf.c
index 9e7731a8a2..362dd252bc 100644
--- a/lib/mbuf/rte_mbuf.c
+++ b/lib/mbuf/rte_mbuf.c
@@ -281,6 +281,10 @@ rte_pktmbuf_pool_create(const char *name, unsigned int n,
unsigned int cache_size, uint16_t priv_size, uint16_t data_room_size,
int socket_id)
{
+#if RTE_MBUF_HISTORY_DEBUG
+ if (rte_mbuf_history_init() < 0)
+ RTE_LOG(ERR, MBUF, "Failed to enable mbuf history\n");
+#endif
return rte_pktmbuf_pool_create_by_ops(name, n, cache_size, priv_size,
data_room_size, socket_id, NULL);
}
@@ -516,8 +520,12 @@ void rte_pktmbuf_free_bulk(struct rte_mbuf **mbufs, unsigned int count)
} while (m != NULL);
}
- if (nb_pending > 0)
+ if (nb_pending > 0) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(pending, nb_pending, RTE_MBUF_FREE);
+#endif
rte_mempool_put_bulk(pending[0]->pool, (void **)pending, nb_pending);
+ }
}
/* Creates a shallow copy of mbuf */
diff --git a/lib/mbuf/rte_mbuf.h b/lib/mbuf/rte_mbuf.h
index 06ab7502a5..8126f09de4 100644
--- a/lib/mbuf/rte_mbuf.h
+++ b/lib/mbuf/rte_mbuf.h
@@ -40,6 +40,7 @@
#include <rte_branch_prediction.h>
#include <rte_mbuf_ptype.h>
#include <rte_mbuf_core.h>
+#include "rte_mbuf_history.h"
#ifdef __cplusplus
extern "C" {
@@ -607,6 +608,9 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
if (rte_mempool_get(mp, &ret.ptr) < 0)
return NULL;
__rte_mbuf_raw_sanity_check(ret.m);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(ret.m, RTE_MBUF_ALLOC);
+#endif
return ret.m;
}
@@ -642,9 +646,14 @@ static __rte_always_inline int
rte_mbuf_raw_alloc_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned int count)
{
int rc = rte_mempool_get_bulk(mp, (void **)mbufs, count);
- if (likely(rc == 0))
- for (unsigned int idx = 0; idx < count; idx++)
+ if (likely(rc == 0)) {
+ for (unsigned int idx = 0; idx < count; idx++) {
__rte_mbuf_raw_sanity_check(mbufs[idx]);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbufs[idx], RTE_MBUF_ALLOC);
+#endif
+ }
+ }
return rc;
}
@@ -667,6 +676,9 @@ rte_mbuf_raw_free(struct rte_mbuf *m)
{
__rte_mbuf_raw_sanity_check(m);
rte_mempool_put(m->pool, m);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(m, RTE_MBUF_FREE);
+#endif
}
/**
@@ -701,6 +713,9 @@ rte_mbuf_raw_free_bulk(struct rte_mempool *mp, struct rte_mbuf **mbufs, unsigned
RTE_ASSERT(m != NULL);
RTE_ASSERT(m->pool == mp);
__rte_mbuf_raw_sanity_check(m);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbufs[idx], RTE_MBUF_FREE);
+#endif
}
rte_mempool_put_bulk(mp, (void **)mbufs, count);
@@ -1013,6 +1028,10 @@ static inline int rte_pktmbuf_alloc_bulk(struct rte_mempool *pool,
if (unlikely(rc))
return rc;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(mbufs, count, RTE_MBUF_ALLOC);
+#endif
+
/* To understand duff's device on loop unwinding optimization, see
* https://en.wikipedia.org/wiki/Duff's_device.
* Here while() loop is used rather than do() while{} to avoid extra
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 865c90f579..8ae31b4e65 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -240,6 +240,13 @@ void rte_mbuf_dyn_dump(FILE *out);
* and parameters together.
*/
+/**
+ * The mbuf history dynamic field provides lifecycle tracking for mbuf objects through the system.
+ * It records a fixed set of predefined operations to maintain performance
+ * while providing debugging capabilities.
+ */
+#define RTE_MBUF_DYNFIELD_HISTORY_NAME "rte_mbuf_dynfield_history"
+
/*
* The metadata dynamic field provides some extra packet information
* to interact with RTE Flow engine. The metadata in sent mbufs can be
diff --git a/lib/mbuf/rte_mbuf_history.c b/lib/mbuf/rte_mbuf_history.c
new file mode 100644
index 0000000000..5be56289ca
--- /dev/null
+++ b/lib/mbuf/rte_mbuf_history.c
@@ -0,0 +1,181 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_mbuf_history.h>
+#include <rte_mbuf_dyn.h>
+#include <rte_log.h>
+#include <rte_errno.h>
+#include <eal_export.h>
+#include <rte_mempool.h>
+#include <rte_tailq.h>
+#include <stdio.h>
+#include <string.h>
+#include <inttypes.h>
+
+/* Global offset for the history field */
+int rte_mbuf_history_field_offset = -1;
+RTE_EXPORT_SYMBOL(rte_mbuf_history_field_offset);
+
+#if RTE_MBUF_HISTORY_DEBUG
+/* Dynamic field definition for mbuf history */
+static const struct rte_mbuf_dynfield mbuf_dynfield_history = {
+ .name = RTE_MBUF_DYNFIELD_HISTORY_NAME,
+ .size = sizeof(uint64_t),
+ .align = RTE_ALIGN(sizeof(uint64_t), 8),
+};
+
+/* Context structure for combined statistics counting and mbuf history printing */
+struct count_and_print_ctx {
+ uint64_t *stats;
+ FILE *f;
+};
+
+static void
+mbuf_history_count_stats_and_print(struct rte_mempool *mp __rte_unused, void *opaque,
+ void *obj, unsigned obj_idx __rte_unused)
+{
+ struct count_and_print_ctx *ctx = (struct count_and_print_ctx *)opaque;
+
+ struct rte_mbuf *mbuf = (struct rte_mbuf *)obj;
+
+ if (obj == NULL || ctx == NULL || ctx->stats == NULL || ctx->f == NULL)
+ return;
+
+ /* Get mbuf history */
+ uint64_t history = rte_mbuf_history_get(mbuf);
+
+ ctx->stats[0]++; /* n_total */
+
+ if (history == 0) {
+ ctx->stats[1]++; /* n_never */
+ return;
+ }
+
+ /* Extract the most recent operation */
+ uint64_t op = history & RTE_MBUF_HISTORY_MASK;
+
+ switch (op) {
+ case RTE_MBUF_FREE:
+ ctx->stats[2]++; /* n_free */
+ break;
+ case RTE_MBUF_PMD_FREE:
+ ctx->stats[3]++; /* n_pmd_free */
+ break;
+ case RTE_MBUF_PMD_TX:
+ ctx->stats[4]++; /* n_pmd_tx */
+ break;
+ case RTE_MBUF_APP_RX:
+ ctx->stats[5]++; /* n_app_rx */
+ break;
+ case RTE_MBUF_PMD_ALLOC:
+ ctx->stats[6]++; /* n_pmd_alloc */
+ break;
+ case RTE_MBUF_ALLOC:
+ ctx->stats[7]++; /* n_alloc */
+ break;
+ case RTE_MBUF_BUSY_TX:
+ ctx->stats[8]++; /* n_busy_tx */
+ break;
+ default:
+ break;
+ }
+
+ /* Print the mbuf history value */
+ fprintf(ctx->f, "mbuf %p: %016" PRIX64 "\n", mbuf, history);
+
+}
+
+static void
+mbuf_history_get_stat(struct rte_mempool *mp, void *arg)
+{
+ FILE *f = (FILE *)arg;
+ uint64_t stats[9] = {0};
+
+ if (f == NULL)
+ return;
+
+ /* Output mempool header */
+ fprintf(f, "=== Mempool: %s ===\n", mp->name);
+
+ /* Create context structure for combined counting and printing */
+ struct count_and_print_ctx ctx = { .stats = stats, .f = f };
+
+ /* Single pass: collect statistics and print mbuf history */
+ rte_mempool_obj_iter(mp, mbuf_history_count_stats_and_print, &ctx);
+
+ /* Calculate total allocated mbufs */
+ uint64_t total_allocated = stats[3] + stats[4] + stats[5] +
+ stats[6] + stats[7] + stats[8];
+
+ /* Print statistics summary */
+ fprintf(f, "\n"
+ "Populated: %u\n"
+ "Never allocated: %" PRIu64 "\n"
+ "Free: %" PRIu64 "\n"
+ "Allocated: %" PRIu64 "\n"
+ "PMD owned Tx: %" PRIu64 "\n"
+ "PMD owned Rx: %" PRIu64 "\n"
+ "App owned alloc: %" PRIu64 "\n"
+ "App owned Rx: %" PRIu64 "\n"
+ "App owned busy: %" PRIu64 "\n"
+ "Counted total: %" PRIu64 "\n",
+ mp->populated_size, stats[1], stats[2], total_allocated,
+ stats[4], stats[6], stats[7], stats[5], stats[8], stats[0]);
+
+ fprintf(f, "---\n");
+}
+#endif
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_mbuf_history_dump, 25.11)
+#if RTE_MBUF_HISTORY_DEBUG
+void rte_mbuf_history_dump(FILE *f)
+{
+ if (f == NULL) {
+ RTE_LOG(ERR, MBUF, "Invalid file pointer\n");
+ return;
+ }
+
+ fprintf(f, "=== MBUF History Statistics ===\n");
+ fprintf(f, "Dumping complete mbuf history for all mempools...\n");
+
+ /* Check if mbuf history is initialized */
+ if (rte_mbuf_history_field_offset == -1) {
+ fprintf(f, "WARNING: MBUF history not initialized. Call rte_mbuf_history_init() first.\n\n");
+ return;
+ }
+
+ /* Use rte_mempool_walk to iterate over all mempools */
+ rte_mempool_walk(mbuf_history_get_stat, f);
+}
+
+int rte_mbuf_history_init(void)
+{
+ if (rte_mbuf_history_field_offset != -1) {
+ /* Already initialized */
+ return 0;
+ }
+
+ rte_mbuf_history_field_offset = rte_mbuf_dynfield_register(&mbuf_dynfield_history);
+ if (rte_mbuf_history_field_offset < 0) {
+ RTE_LOG(ERR, MBUF, "Failed to register mbuf history dynamic field: %s\n",
+ rte_strerror(rte_errno));
+ return -1;
+ }
+ return 0;
+}
+#else
+void rte_mbuf_history_dump(FILE *f)
+{
+ RTE_SET_USED(f);
+ RTE_LOG(INFO, MBUF, "Mbuf history recorder is not supported\n");
+}
+
+int rte_mbuf_history_init(void)
+{
+ rte_errno = ENOTSUP;
+ return -1;
+}
+#endif
+RTE_EXPORT_SYMBOL(rte_mbuf_history_init);
diff --git a/lib/mbuf/rte_mbuf_history.h b/lib/mbuf/rte_mbuf_history.h
new file mode 100644
index 0000000000..4448ad1557
--- /dev/null
+++ b/lib/mbuf/rte_mbuf_history.h
@@ -0,0 +1,154 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef _RTE_MBUF_HISTORY_H_
+#define _RTE_MBUF_HISTORY_H_
+
+/**
+ * @file
+ * MBUF History
+ *
+ * This module provides history tracking for mbuf objects using dynamic fields.
+ * It tracks the lifecycle of mbuf objects through the system with a fixed set
+ * of predefined events to maintain performance.
+ *
+ * The history is stored as a 64-bit value in the mbuf dynamic field area,
+ * with each event encoded in 4 bits, allowing up to 16 events to be tracked.
+ */
+
+#include <stdint.h>
+#include <rte_mbuf_dyn.h>
+#include <rte_common.h>
+#include <rte_branch_prediction.h>
+#include "mbuf_log.h"
+
+/* Forward declaration to avoid circular dependency */
+struct rte_mbuf;
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Number of bits for each history operation
+ */
+#define RTE_MBUF_HISTORY_BITS 4
+
+/**
+ * Maximum number of history operations that can be stored
+ */
+#define RTE_MBUF_HISTORY_MAX_OPS 16
+
+/**
+ * Mask for extracting the most recent operation from history
+ */
+#define RTE_MBUF_HISTORY_MASK ((1ULL << RTE_MBUF_HISTORY_BITS) - 1)
+
+/**
+ * History operation types
+ */
+enum rte_mbuf_history_op {
+ RTE_MBUF_NEVER = 0, /* Initial state - never allocated */
+ RTE_MBUF_FREE = 1, /* Freed back to pool */
+ RTE_MBUF_PMD_FREE = 2, /* Freed by PMD back to pool */
+ RTE_MBUF_PMD_TX = 3, /* Sent to PMD for Tx */
+ RTE_MBUF_APP_RX = 4, /* Returned to application on Rx */
+ RTE_MBUF_PMD_ALLOC = 5, /* Allocated by PMD for Rx */
+ RTE_MBUF_ALLOC = 6, /* Allocated by application */
+ RTE_MBUF_BUSY_TX = 7, /* Returned to app due to Tx busy */
+ RTE_MBUF_USR3 = 13, /* Application-defined event 3 */
+ RTE_MBUF_USR2 = 14, /* Application-defined event 2 */
+ RTE_MBUF_USR1 = 15, /* Application-defined event 1 */
+ RTE_MBUF_MAX = 16, /* Maximum trace operation value */
+};
+
+
+/**
+ * Global offset for the history field (set during initialization)
+ */
+extern int rte_mbuf_history_field_offset;
+
+/**
+ * Initialize the mbuf history system
+ *
+ * This function registers the dynamic field for mbuf history tracking.
+ * It should be called once during application initialization.
+ *
+ * Note: This function is called by rte_pktmbuf_pool_create,
+ * so explicit invocation is usually not required unless initializing manually.
+ *
+ * @return
+ * 0 on success, -1 on failure with rte_errno set
+ */
+int rte_mbuf_history_init(void);
+
+#if RTE_MBUF_HISTORY_DEBUG
+/**
+ * Get the history value from an mbuf
+ *
+ * @param m
+ * Pointer to the mbuf
+ * @return
+ * The history value, or 0 if history is not available
+ */
+static inline uint64_t rte_mbuf_history_get(const struct rte_mbuf *m)
+{
+ if (unlikely(m == NULL || rte_mbuf_history_field_offset == -1))
+ return 0;
+
+ return *RTE_MBUF_DYNFIELD(m, rte_mbuf_history_field_offset, uint64_t *);
+}
+
+/**
+ * Mark an mbuf with a history event
+ *
+ * @param m
+ * Pointer to the mbuf
+ * @param op
+ * The operation to record
+ */
+static inline void rte_mbuf_history_mark(struct rte_mbuf *m, uint32_t op)
+{
+ if (unlikely(m == NULL || op >= RTE_MBUF_MAX || rte_mbuf_history_field_offset == -1))
+ return;
+
+ uint64_t *history = RTE_MBUF_DYNFIELD(m, rte_mbuf_history_field_offset, uint64_t *);
+ *history = (*history << RTE_MBUF_HISTORY_BITS) | op;
+}
+
+/**
+ * Mark multiple mbufs with a history event
+ *
+ * @param mbufs
+ * Array of mbuf pointers
+ * @param n
+ * Number of mbufs to mark
+ * @param op
+ * The operation to record
+ */
+static inline void rte_mbuf_history_bulk(struct rte_mbuf * const *mbufs,
+ uint32_t n, uint32_t op)
+{
+ if (unlikely(mbufs == NULL || op >= RTE_MBUF_MAX || rte_mbuf_history_field_offset == -1))
+ return;
+
+ while (n--)
+ rte_mbuf_history_mark(*mbufs++, op);
+}
+#endif
+
+/**
+ * Dump mbuf history statistics for all mempools to a file
+ *
+ * @param f
+ * File pointer to write the history statistics to
+ */
+__rte_experimental
+void rte_mbuf_history_dump(FILE *f);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MBUF_HISTORY_H_ */
diff --git a/meson_options.txt b/meson_options.txt
index e49b2fc089..48f6d4a9a5 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -16,6 +16,8 @@ option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>', d
'Subdirectory of libdir where to install PMDs. Defaults to using a versioned subdirectory.')
option('enable_docs', type: 'boolean', value: false, description:
'build documentation')
+option('enable_mbuf_history', type: 'boolean', value: false, description:
+ 'Enable mbuf history tracking for debugging purposes. This will track mbuf allocation and free operations. Default is false.')
option('enable_apps', type: 'string', value: '', description:
'Comma-separated list of apps to build. If unspecified, build all apps.')
option('enable_deprecated_libs', type: 'string', value: '', description:
--
2.34.1
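To make the encoding above concrete: with the feature enabled at build time (-Denable_mbuf_history=true), each rte_mbuf_history_mark() call shifts the 64-bit word left by four bits and ORs in the new event, so the most recent event occupies the lowest nibble. Below is a minimal decoding sketch on the application side, assuming only the layout defined in rte_mbuf_history.h; this helper is not part of the series.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical helper: walk the history word from the highest
     * nibble down, skipping unused (RTE_MBUF_NEVER == 0) slots, so
     * events print in chronological order, oldest first.
     */
    static void
    decode_history(uint64_t history)
    {
        int shift;

        if (history == 0) {
            printf("never allocated\n");
            return;
        }
        for (shift = 60; shift >= 0; shift -= 4) {
            unsigned int op = (history >> shift) & 0xF;

            if (op == 0)
                continue;
            printf("%u%s", op, shift > 0 ? " -> " : "\n");
        }
    }

For example, decode_history(0x65) prints "6 -> 5", i.e. RTE_MBUF_ALLOC followed by RTE_MBUF_PMD_ALLOC.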
* [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
2025-09-16 15:12 ` [PATCH v2 1/4] mbuf: record mbuf operations history Shani Peretz
@ 2025-09-16 15:12 ` Shani Peretz
2025-09-16 21:14 ` Stephen Hemminger
2025-09-16 15:12 ` [PATCH v2 3/4] app/testpmd: add testpmd command to dump mbuf history Shani Peretz
2025-09-16 15:12 ` [PATCH v2 4/4] usertool: add a script to parse mbuf history dump Shani Peretz
3 siblings, 1 reply; 24+ messages in thread
From: Shani Peretz @ 2025-09-16 15:12 UTC (permalink / raw)
To: dev
Cc: mb, stephen, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Shani Peretz, Dariusz Sosnowski, Bing Zhao,
Ori Kam, Suanming Mou, Matan Azrad
Record operations on mbufs when they are allocated
and released inside the mlx5 PMD.
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
drivers/net/mlx5/mlx5_rx.c | 25 +++++++++++++++++++++++++
drivers/net/mlx5/mlx5_rx.h | 6 ++++++
drivers/net/mlx5/mlx5_rxq.c | 15 +++++++++++++--
drivers/net/mlx5/mlx5_rxtx_vec.c | 16 ++++++++++++++++
drivers/net/mlx5/mlx5_tx.h | 21 +++++++++++++++++++++
drivers/net/mlx5/mlx5_txq.c | 3 +++
6 files changed, 84 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index 420a03068d..4e44892d93 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -640,12 +640,19 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
elt_idx = (elts_ci + i) & e_mask;
elt = &(*rxq->elts)[elt_idx];
*elt = rte_mbuf_raw_alloc(rxq->mp);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(*elt, RTE_MBUF_PMD_ALLOC);
+#endif
if (!*elt) {
for (i--; i >= 0; --i) {
elt_idx = (elts_ci +
i) & elts_n;
elt = &(*rxq->elts)
[elt_idx];
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(*elt,
+ RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg
(*elt);
}
@@ -1048,6 +1055,9 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rte_prefetch0(wqe);
/* Allocate the buf from the same pool. */
rep = rte_mbuf_raw_alloc(seg->pool);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(rep, RTE_MBUF_PMD_ALLOC);
+#endif
if (unlikely(rep == NULL)) {
++rxq->stats.rx_nombuf;
if (!pkt) {
@@ -1062,6 +1072,9 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rep = NEXT(pkt);
NEXT(pkt) = NULL;
NB_SEGS(pkt) = 1;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(pkt, RTE_MBUF_PMD_FREE);
+#endif
rte_mbuf_raw_free(pkt);
pkt = rep;
}
@@ -1076,6 +1089,9 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
&mcqe, &skip_cnt, false, NULL);
if (unlikely(len & MLX5_ERROR_CQE_MASK)) {
/* We drop packets with non-critical errors */
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(rep, RTE_MBUF_PMD_FREE);
+#endif
rte_mbuf_raw_free(rep);
if (len == MLX5_CRITICAL_ERROR_CQE_RET) {
rq_ci = rxq->rq_ci << sges_n;
@@ -1089,6 +1105,9 @@ mlx5_rx_burst(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
continue;
}
if (len == 0) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(rep, RTE_MBUF_PMD_FREE);
+#endif
rte_mbuf_raw_free(rep);
break;
}
@@ -1540,6 +1559,9 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
++rxq->stats.rx_nombuf;
break;
}
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(pkt, RTE_MBUF_PMD_ALLOC);
+#endif
len = (byte_cnt & MLX5_MPRQ_LEN_MASK) >> MLX5_MPRQ_LEN_SHIFT;
MLX5_ASSERT((int)len >= (rxq->crc_present << 2));
if (rxq->crc_present)
@@ -1547,6 +1569,9 @@ mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rxq_code = mprq_buf_to_pkt(rxq, pkt, len, buf,
strd_idx, strd_cnt);
if (unlikely(rxq_code != MLX5_RXQ_CODE_EXIT)) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(pkt, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(pkt);
if (rxq_code == MLX5_RXQ_CODE_DROPPED) {
++rxq->stats.idropped;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 7be31066a5..075b4bfc4b 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -525,6 +525,9 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
if (unlikely(next == NULL))
return MLX5_RXQ_CODE_NOMBUF;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(next, RTE_MBUF_PMD_ALLOC);
+#endif
NEXT(prev) = next;
SET_DATA_OFF(next, 0);
addr = RTE_PTR_ADD(addr, seg_len);
@@ -588,6 +591,9 @@ mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
if (unlikely(seg == NULL))
return MLX5_RXQ_CODE_NOMBUF;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(seg, RTE_MBUF_PMD_ALLOC);
+#endif
SET_DATA_OFF(seg, 0);
rte_memcpy(rte_pktmbuf_mtod(seg, void *),
RTE_PTR_ADD(addr, len - hdrm_overlap),
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index aeefece8c1..434a57ca32 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -164,6 +164,9 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
rte_errno = ENOMEM;
goto error;
}
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(buf, RTE_MBUF_PMD_ALLOC);
+#endif
/* Only vectored Rx routines rely on headroom size. */
MLX5_ASSERT(!has_vec_support ||
DATA_OFF(buf) >= RTE_PKTMBUF_HEADROOM);
@@ -221,8 +224,12 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
err = rte_errno; /* Save rte_errno before cleanup. */
elts_n = i;
for (i = 0; (i != elts_n); ++i) {
- if ((*rxq_ctrl->rxq.elts)[i] != NULL)
+ if ((*rxq_ctrl->rxq.elts)[i] != NULL) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark((*rxq_ctrl->rxq.elts)[i], RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg((*rxq_ctrl->rxq.elts)[i]);
+ }
(*rxq_ctrl->rxq.elts)[i] = NULL;
}
if (rxq_ctrl->share_group == 0)
@@ -324,8 +331,12 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
rxq->rq_pi = elts_ci;
}
for (i = 0; i != q_n; ++i) {
- if ((*rxq->elts)[i] != NULL)
+ if ((*rxq->elts)[i] != NULL) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark((*rxq->elts)[i], RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg((*rxq->elts)[i]);
+ }
(*rxq->elts)[i] = NULL;
}
}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 1b701801c5..c7ca808f43 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -63,6 +63,9 @@ rxq_handle_pending_error(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
if (pkt->packet_type == RTE_PTYPE_ALL_MASK || rxq->err_state) {
#ifdef MLX5_PMD_SOFT_COUNTERS
err_bytes += PKT_LEN(pkt);
+#endif
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(pkt, RTE_MBUF_PMD_FREE);
#endif
rte_pktmbuf_free_seg(pkt);
} else {
@@ -107,6 +110,9 @@ mlx5_rx_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
rxq->stats.rx_nombuf += n;
return;
}
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(elts, n, RTE_MBUF_PMD_ALLOC);
+#endif
if (unlikely(mlx5_mr_btree_len(&rxq->mr_ctrl.cache_bh) > 1)) {
for (i = 0; i < n; ++i) {
/*
@@ -171,6 +177,9 @@ mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
rxq->stats.rx_nombuf += n;
return;
}
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(elts, n, RTE_MBUF_PMD_ALLOC);
+#endif
rxq->elts_ci += n;
/* Prevent overflowing into consumed mbufs. */
elts_idx = rxq->elts_ci & wqe_mask;
@@ -224,6 +233,9 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
if (!elts[i]->pkt_len) {
rxq->consumed_strd = strd_n;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(elts[i], RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(elts[i]);
#ifdef MLX5_PMD_SOFT_COUNTERS
rxq->stats.ipackets -= 1;
@@ -236,6 +248,9 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
buf, rxq->consumed_strd, strd_cnt);
rxq->consumed_strd += strd_cnt;
if (unlikely(rxq_code != MLX5_RXQ_CODE_EXIT)) {
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(elts[i], RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(elts[i]);
#ifdef MLX5_PMD_SOFT_COUNTERS
rxq->stats.ipackets -= 1;
@@ -586,6 +601,7 @@ mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
rte_io_wmb();
*rxq->cq_db = rte_cpu_to_be_32(rxq->cq_ci);
} while (tn != pkts_n);
+
return tn;
}
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 16307206e2..c3d69942a8 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -555,6 +555,9 @@ mlx5_tx_free_mbuf(struct mlx5_txq_data *__rte_restrict txq,
if (!MLX5_TXOFF_CONFIG(MULTI) && txq->fast_free) {
mbuf = *pkts;
pool = mbuf->pool;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(pkts, pkts_n, RTE_MBUF_PMD_FREE);
+#endif
rte_mempool_put_bulk(pool, (void *)pkts, pkts_n);
return;
}
@@ -610,6 +613,9 @@ mlx5_tx_free_mbuf(struct mlx5_txq_data *__rte_restrict txq,
* Free the array of pre-freed mbufs
* belonging to the same memory pool.
*/
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_bulk(p_free, n_free, RTE_MBUF_PMD_FREE);
+#endif
rte_mempool_put_bulk(pool, (void *)p_free, n_free);
if (unlikely(mbuf != NULL)) {
/* There is the request to start new scan. */
@@ -1225,6 +1231,9 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
/* Exhausted packet, just free. */
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbuf, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(mbuf);
loc->mbuf_off = 0;
MLX5_ASSERT(loc->mbuf_nseg > 1);
@@ -1267,6 +1276,9 @@ mlx5_tx_mseg_memcpy(uint8_t *pdst,
/* Exhausted packet, just free. */
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbuf, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(mbuf);
loc->mbuf_off = 0;
MLX5_ASSERT(loc->mbuf_nseg >= 1);
@@ -1717,6 +1729,9 @@ mlx5_tx_mseg_build(struct mlx5_txq_data *__rte_restrict txq,
/* Zero length segment found, just skip. */
mbuf = loc->mbuf;
loc->mbuf = loc->mbuf->next;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbuf, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(mbuf);
if (--loc->mbuf_nseg == 0)
break;
@@ -2020,6 +2035,9 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
wqe->cseg.sq_ds -= RTE_BE32(1);
mbuf = loc->mbuf;
loc->mbuf = mbuf->next;
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(mbuf, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(mbuf);
if (--nseg == 0)
break;
@@ -3319,6 +3337,9 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
* Packet data are completely inlined,
* free the packet immediately.
*/
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(loc->mbuf, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(loc->mbuf);
} else if ((!MLX5_TXOFF_CONFIG(EMPW) ||
MLX5_TXOFF_CONFIG(MPW)) &&
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 2aa2475a8a..445d1d62c4 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -79,6 +79,9 @@ txq_free_elts(struct mlx5_txq_ctrl *txq_ctrl)
struct rte_mbuf *elt = (*elts)[elts_tail & elts_m];
MLX5_ASSERT(elt != NULL);
+#if RTE_MBUF_HISTORY_DEBUG
+ rte_mbuf_history_mark(elt, RTE_MBUF_PMD_FREE);
+#endif
rte_pktmbuf_free_seg(elt);
#ifdef RTE_LIBRTE_MLX5_DEBUG
/* Poisoning. */
--
2.34.1
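A side note on the pattern above: every free path in the driver now carries the same three-line #if/#endif block. One way to cut that duplication locally is to fold the mark-and-free pair into a small inline wrapper; the helper below is a sketch only and is not part of this patch.

    /* Hypothetical wrapper: records the PMD free event and releases
     * the segment in one place, so each call site stays a single line
     * with or without RTE_MBUF_HISTORY_DEBUG.
     */
    static __rte_always_inline void
    mlx5_mbuf_free_seg_marked(struct rte_mbuf *m)
    {
    #if RTE_MBUF_HISTORY_DEBUG
        rte_mbuf_history_mark(m, RTE_MBUF_PMD_FREE);
    #endif
        rte_pktmbuf_free_seg(m);
    }

Call sites such as txq_free_elts() would then shrink to mlx5_mbuf_free_seg_marked(elt); this is essentially the direction the review discussion later in the thread takes for the library API itself.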
* [PATCH v2 3/4] app/testpmd: add testpmd command to dump mbuf history
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
2025-09-16 15:12 ` [PATCH v2 1/4] mbuf: record mbuf operations history Shani Peretz
2025-09-16 15:12 ` [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history Shani Peretz
@ 2025-09-16 15:12 ` Shani Peretz
2025-09-16 15:12 ` [PATCH v2 4/4] usertool: add a script to parse mbuf history dump Shani Peretz
3 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-09-16 15:12 UTC (permalink / raw)
To: dev
Cc: mb, stephen, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Shani Peretz, Aman Singh
Dump the mbuf history to the console or to a file.
The dump will contain:
- Operation history for each mbuf
- Summary and statistics about all mbufs
testpmd> dump_mbuf_history
testpmd> dump_mbuf_history <file_name>
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
app/test-pmd/cmdline.c | 60 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 59 insertions(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 3731fba370..7a1cba5094 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -40,6 +40,7 @@
#include <rte_gro.h>
#endif
#include <rte_mbuf_dyn.h>
+#include <rte_mbuf_history.h>
#include <rte_trace.h>
#include <cmdline_rdline.h>
@@ -296,6 +297,9 @@ static void cmd_help_long_parsed(void *parsed_result,
"dump_log_types\n"
" Dumps the log level for all the dpdk modules\n\n"
+ "dump_mbuf_history\n"
+ " Dumps the mbuf history\n\n"
+
"show port (port_id) speed_lanes capabilities"
" Show speed lanes capabilities of a port.\n\n"
);
@@ -9177,6 +9181,8 @@ static void cmd_dump_parsed(void *parsed_result,
#endif
else if (!strcmp(res->dump, "dump_log_types"))
rte_log_dump(stdout);
+ else if (!strcmp(res->dump, "dump_mbuf_history"))
+ rte_mbuf_history_dump(stdout);
}
static cmdline_parse_token_string_t cmd_dump_dump =
@@ -9198,7 +9204,8 @@ cmd_dump_init(void)
#ifndef RTE_EXEC_ENV_WINDOWS
"dump_trace#"
#endif
- "dump_log_types";
+ "dump_log_types#"
+ "dump_mbuf_history";
}
static cmdline_parse_inst_t cmd_dump = {
@@ -9260,6 +9267,56 @@ static cmdline_parse_inst_t cmd_dump_one = {
},
};
+/* Dump mbuf history to file */
+struct cmd_dump_mbuf_to_file_result {
+ cmdline_fixed_string_t dump;
+ cmdline_fixed_string_t file;
+};
+
+static void cmd_dump_mbuf_to_file_parsed(void *parsed_result, struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_dump_mbuf_to_file_result *res = parsed_result;
+ FILE *file = stdout;
+ char *file_name = res->file;
+
+ if (strcmp(res->dump, "dump_mbuf_history")) {
+ cmdline_printf(cl, "Invalid dump type\n");
+ return;
+ }
+
+ if (file_name && strlen(file_name)) {
+ file = fopen(file_name, "w");
+ if (!file) {
+ cmdline_printf(cl, "Failed to open %s, dumping to stdout\n", file_name);
+ file = stdout;
+ }
+ }
+ rte_mbuf_history_dump(file);
+ printf("Mbuf history dump finished\n");
+ if (file != stdout)
+ fclose(file);
+}
+
+static cmdline_parse_token_string_t cmd_dump_mbuf_to_file_dump =
+ TOKEN_STRING_INITIALIZER(struct cmd_dump_mbuf_to_file_result, dump,
+ "dump_mbuf_history");
+
+static cmdline_parse_token_string_t cmd_dump_mbuf_to_file_file =
+ TOKEN_STRING_INITIALIZER(struct cmd_dump_mbuf_to_file_result, file, NULL);
+
+static cmdline_parse_inst_t cmd_dump_mbuf_to_file = {
+ .f = cmd_dump_mbuf_to_file_parsed, /* function to call */
+ .data = NULL, /* 2nd arg of func */
+ .help_str = "dump_mbuf_history <file_name>: Dump mbuf history to file",
+ .tokens = { /* token list, NULL terminated */
+ (void *)&cmd_dump_mbuf_to_file_dump,
+ (void *)&cmd_dump_mbuf_to_file_file,
+ NULL,
+ },
+};
+
/* *** Filters Control *** */
#define IPV4_ADDR_TO_UINT(ip_addr, ip) \
@@ -13999,6 +14056,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
&cmd_cleanup_txq_mbufs,
&cmd_dump,
&cmd_dump_one,
+ &cmd_dump_mbuf_to_file,
&cmd_flow,
&cmd_show_port_meter_cap,
&cmd_add_port_meter_profile_srtcm,
--
2.34.1
* [PATCH v2 4/4] usertool: add a script to parse mbuf history dump
2025-09-16 15:12 ` [PATCH v2 0/4] add mbuf " Shani Peretz
` (2 preceding siblings ...)
2025-09-16 15:12 ` [PATCH v2 3/4] app/testpmd: add testpmd command to dump mbuf history Shani Peretz
@ 2025-09-16 15:12 ` Shani Peretz
3 siblings, 0 replies; 24+ messages in thread
From: Shani Peretz @ 2025-09-16 15:12 UTC (permalink / raw)
To: dev
Cc: mb, stephen, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Shani Peretz, Robin Jarry
Added a Python script that parses the mbuf history dump
generated by rte_mbuf_history_dump and presents it in a
human-readable format.
If an operation ID is repeated, such as in the case of a double free,
it will be highlighted in red and listed at the end of the file.
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
---
usertools/dpdk-mbuf_history_parser.py | 173 ++++++++++++++++++++++++++
1 file changed, 173 insertions(+)
create mode 100755 usertools/dpdk-mbuf_history_parser.py
diff --git a/usertools/dpdk-mbuf_history_parser.py b/usertools/dpdk-mbuf_history_parser.py
new file mode 100755
index 0000000000..c39a796d5d
--- /dev/null
+++ b/usertools/dpdk-mbuf_history_parser.py
@@ -0,0 +1,173 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright (c) 2023 NVIDIA Corporation & Affiliates
+
+import sys
+import re
+import os
+import enum
+
+RED = "\033[91m"
+RESET = "\033[0m"
+ENUM_PATTERN = r'enum\s+rte_mbuf_history_op\s*{([^}]+)}'
+VALUE_PATTERN = r'([A-Z_]+)\s*=\s*(\d+),\s*(?:/\*\s*(.*?)\s*\*/)?'
+HEADER_FILE = os.path.join(
+ os.path.dirname(os.path.dirname(__file__)),
+ 'lib/mbuf/rte_mbuf_history.h'
+)
+
+
+def print_history_sequence(address: str, sequence: list[str]):
+ op_width = max(
+ len(re.sub(r'\x1b\[[0-9;]*m', '', op)) for op in sequence
+ )
+ for i in range(0, len(sequence), 4):
+ chunk = sequence[i:i + 4]
+ formatted_ops = [f"{op:<{op_width}}" for op in chunk]
+ line = ""
+ for j, op in enumerate(formatted_ops):
+ line += op
+ if j < len(formatted_ops) - 1:
+ line += " -> "
+ if i + 4 < len(sequence):
+ line += " ->"
+ print(f"mbuf {address}: " + line)
+ print()
+
+
+def match_field(match: re.Match) -> tuple[int, str]:
+ name, value, _ = match.groups()
+ return (int(value), name.replace('RTE_MBUF_', ''))
+
+
+class HistoryEnum:
+ def __init__(self, ops: enum.Enum):
+ self.ops = ops
+
+ @staticmethod
+ def from_header(header_file: str) -> 'HistoryEnum':
+ with open(header_file, 'r') as f:
+ content = f.read()
+
+ # Extract each enum value and its comment
+ enum_content = re.search(ENUM_PATTERN, content, re.DOTALL).group(1)
+ fields = map(match_field, re.finditer(VALUE_PATTERN, enum_content))
+ fields = {v: k for k, v in fields}
+ return HistoryEnum(enum.Enum('HistoryOps', fields))
+
+
+class HistoryLine:
+ def __init__(self, address: str, ops: list):
+ self.address = address
+ self.ops = ops
+
+ def repeats(self) -> tuple[list[str], str | None]:
+ repeated = None
+ sequence = []
+ for idx, op in enumerate(self.ops):
+ if idx > 0 and op == self.ops[idx - 1] and op.name != 'NEVER':
+ sequence[-1] = f"{RED}{op.name}{RESET}"
+ sequence.append(f"{RED}{op.name}{RESET}")
+ repeated = op.name
+ else:
+ sequence.append(op.name)
+ return sequence, repeated
+
+
+class HistoryMetrics:
+ def __init__(self, metrics: dict[str, int]):
+ self.metrics = metrics
+
+ def max_name_width(self) -> int:
+ return max(len(name) for name in self.metrics.keys())
+
+
+class HistoryParser:
+ def __init__(self):
+ self.history_enum = HistoryEnum.from_header(HEADER_FILE)
+
+ def parse(
+ self, dump_file: str
+ ) -> tuple[list[HistoryLine], 'HistoryMetrics']:
+ with open(dump_file, 'r') as f:
+ lines = [line for line in f.readlines() if line.strip()]
+ populated = next(line for line in lines if "Populated:" in line)
+ metrics_start = lines.index(populated)
+
+ history_lines = lines[3:metrics_start]
+ metrics_lines = lines[metrics_start:-1]
+ return (
+ self._parse_history(history_lines),
+ self._parse_metrics(metrics_lines)
+ )
+
+ def _parse_metrics(self, lines: list[str]) -> HistoryMetrics:
+ metrics = {}
+ for line in lines:
+ key, value = line.split(':', 1)
+ metrics[key] = int(value)
+ return HistoryMetrics(metrics)
+
+ def _parse_history(self, lines: list[str]) -> list[HistoryLine]:
+ # Parse the format "mbuf 0x1054b9980: 0000000000000065"
+ history_lines = []
+ for line in lines:
+ address = line.split(':')[0].split('mbuf ')[1]
+ history = line.split(':')[1]
+ history_lines.append(
+ HistoryLine(
+ address=address,
+ ops=self._parse(int(history, 16))
+ )
+ )
+ return history_lines
+
+ def _parse(self, history: int) -> list[enum.Enum]:
+ ops = []
+ for _ in range(16): # 64 bits / 4 bits = 16 possible operations
+ op = history & 0xF # Extract lowest 4 bits
+ if op == 0:
+ break
+ ops.append(self.history_enum.ops(op))
+ history >>= 4
+
+ ops.reverse()
+ return ops
+
+
+def print_history_lines(history_lines: list[HistoryLine]):
+ lines = [
+ (line.address, line.repeats()) for line in history_lines
+ ]
+
+ for address, (sequence, _) in lines:
+ print_history_sequence(address, sequence)
+
+ print("=== Violations ===")
+ for address, (sequence, repeated) in lines:
+ if repeated:
+ print(f"mbuf {address} has repeated ops: {RED}{repeated}{RESET}")
+
+
+def print_metrics(metrics: HistoryMetrics):
+ print("=== Metrics Summary ===")
+ for name, value in metrics.metrics.items():
+ print(f"{name + ':':<{metrics.max_name_width() + 2}} {value}")
+
+
+def main():
+ if len(sys.argv) != 2:
+ print("Usage: {} <history_file>".format(sys.argv[0]))
+ sys.exit(1)
+
+ history_parser = HistoryParser()
+ history_lines, metrics = history_parser.parse(sys.argv[1])
+
+ print_history_lines(history_lines)
+ print()
+ print_metrics(metrics)
+
+
+if __name__ == "__main__":
+ main()
--
2.34.1
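As a usage illustration (an assumed workflow, not taken from the patch): capture a dump with the testpmd command added in patch 3/4, e.g. dump_mbuf_history history.txt, then run:

    $ ./usertools/dpdk-mbuf_history_parser.py history.txt

The script skips the first three lines of the dump as a header, reads per-mbuf lines in the format shown in its own comment ("mbuf 0x1054b9980: 0000000000000065"), and treats everything from the "Populated:" line onward as metrics. Under the 4-bit encoding, the sample value 0x65 decodes to ALLOC -> PMD_ALLOC, oldest event first.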
* Re: [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history
2025-09-16 15:12 ` [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history Shani Peretz
@ 2025-09-16 21:14 ` Stephen Hemminger
2025-09-16 21:31 ` Thomas Monjalon
0 siblings, 1 reply; 24+ messages in thread
From: Stephen Hemminger @ 2025-09-16 21:14 UTC (permalink / raw)
To: Shani Peretz
Cc: dev, mb, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Dariusz Sosnowski, Bing Zhao, Ori Kam,
Suanming Mou, Matan Azrad
On Tue, 16 Sep 2025 18:12:05 +0300
Shani Peretz <shperetz@nvidia.com> wrote:
> record operations on mbufs when it is allocated
> and released inside the mlx5 PMD.
>
> Signed-off-by: Shani Peretz <shperetz@nvidia.com>
> ---
If you are adding this to one driver, it means it should be
done in all drivers, which means creating lots of churn
and testing work.
For me, this amount of churn and #ifdef is not worth it.
Think of a better way using some other mechanism.
* Re: [PATCH v2 1/4] mbuf: record mbuf operations history
2025-09-16 15:12 ` [PATCH v2 1/4] mbuf: record mbuf operations history Shani Peretz
@ 2025-09-16 21:17 ` Stephen Hemminger
2025-09-16 21:33 ` Thomas Monjalon
0 siblings, 1 reply; 24+ messages in thread
From: Stephen Hemminger @ 2025-09-16 21:17 UTC (permalink / raw)
To: Shani Peretz
Cc: dev, mb, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, thomas, Andrew Rybchenko
On Tue, 16 Sep 2025 18:12:04 +0300
Shani Peretz <shperetz@nvidia.com> wrote:
> @@ -607,6 +608,9 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> if (rte_mempool_get(mp, &ret.ptr) < 0)
> return NULL;
> __rte_mbuf_raw_sanity_check(ret.m);
> +#if RTE_MBUF_HISTORY_DEBUG
> + rte_mbuf_history_mark(ret.m, RTE_MBUF_ALLOC);
> +#endif
> return ret.m;
> }
If you made rte_mbuf_history_mark a dummy function if RTE_MBUF_HISTORY_DEBUG
was not defined, then you could remove most of the #ifdef clutter and
would get type checking on normal builds.
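For illustration, a sketch of that suggestion (an assumed shape, not code from the series): keep the real implementations under RTE_MBUF_HISTORY_DEBUG and provide empty inline stubs otherwise, so call sites need no conditional compilation.

    #if !RTE_MBUF_HISTORY_DEBUG
    /* No-op stubs: arguments remain type-checked on normal builds,
     * and the empty static inline bodies are expected to be
     * optimized out entirely.
     */
    static inline void
    rte_mbuf_history_mark(struct rte_mbuf *m, uint32_t op)
    {
        RTE_SET_USED(m);
        RTE_SET_USED(op);
    }

    static inline void
    rte_mbuf_history_bulk(struct rte_mbuf * const *mbufs,
        uint32_t n, uint32_t op)
    {
        RTE_SET_USED(mbufs);
        RTE_SET_USED(n);
        RTE_SET_USED(op);
    }
    #endif

A call site like rte_mbuf_raw_alloc() could then call rte_mbuf_history_mark(ret.m, RTE_MBUF_ALLOC) unconditionally. Whether the empty inline is fully eliminated by the compilers is checked downthread; it is reported to work for the trace framework.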
* Re: [PATCH v2 2/4] net/mlx5: mark an operation in mbuf's history
2025-09-16 21:14 ` Stephen Hemminger
@ 2025-09-16 21:31 ` Thomas Monjalon
0 siblings, 0 replies; 24+ messages in thread
From: Thomas Monjalon @ 2025-09-16 21:31 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Shani Peretz, dev, mb, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, Dariusz Sosnowski, Bing Zhao, Ori Kam, Suanming Mou,
Matan Azrad
16/09/2025 23:14, Stephen Hemminger:
> On Tue, 16 Sep 2025 18:12:05 +0300
> Shani Peretz <shperetz@nvidia.com> wrote:
>
> > record operations on mbufs when it is allocated
> > and released inside the mlx5 PMD.
> >
> > Signed-off-by: Shani Peretz <shperetz@nvidia.com>
> > ---
>
> If you are adding this to one driver, it means it should be
> done to all drivers. Which means it is creating lots of churn
> and testing.
Why should a new feature be applied to all drivers?
We never force a new feature to be implemented by all;
that is impossible to do.
> For me, this amount of churn and #ifdef is not worth it.
I agree we could avoid the #ifdef with a dummy function
which would be optimized out by the compiler.
> Think of a better way using some other mechanism.
Apart from avoiding the #ifdef, I don't see anything better to do
for tracking what the driver is doing with mbufs.
* Re: [PATCH v2 1/4] mbuf: record mbuf operations history
2025-09-16 21:17 ` Stephen Hemminger
@ 2025-09-16 21:33 ` Thomas Monjalon
2025-09-17 1:22 ` Morten Brørup
0 siblings, 1 reply; 24+ messages in thread
From: Thomas Monjalon @ 2025-09-16 21:33 UTC (permalink / raw)
To: Shani Peretz, Stephen Hemminger
Cc: dev, mb, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, Andrew Rybchenko
16/09/2025 23:17, Stephen Hemminger:
> On Tue, 16 Sep 2025 18:12:04 +0300
> Shani Peretz <shperetz@nvidia.com> wrote:
>
> > @@ -607,6 +608,9 @@ static inline struct rte_mbuf *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> > if (rte_mempool_get(mp, &ret.ptr) < 0)
> > return NULL;
> > __rte_mbuf_raw_sanity_check(ret.m);
> > +#if RTE_MBUF_HISTORY_DEBUG
> > + rte_mbuf_history_mark(ret.m, RTE_MBUF_ALLOC);
> > +#endif
> > return ret.m;
> > }
>
> If you made rte_mbuf_history_mark a dummy function if RTE_MBUF_HISTORY_DEBUG
> was not defined, then you could remove most of the #ifdef clutter and
> would get type checking on normal builds.
Yes good idea!
We need to check whether an empty inline function will be completely
optimized out by the compilers (clang and GCC).
* RE: [PATCH v2 1/4] mbuf: record mbuf operations history
2025-09-16 21:33 ` Thomas Monjalon
@ 2025-09-17 1:22 ` Morten Brørup
0 siblings, 0 replies; 24+ messages in thread
From: Morten Brørup @ 2025-09-17 1:22 UTC (permalink / raw)
To: Thomas Monjalon, Shani Peretz, Stephen Hemminger
Cc: dev, bruce.richardson, ajit.khaparde, jerinj,
konstantin.v.ananyev, david.marchand, maxime.coquelin, gakhil,
viacheslavo, Andrew Rybchenko
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Tuesday, 16 September 2025 23.34
>
> 16/09/2025 23:17, Stephen Hemminger:
> > On Tue, 16 Sep 2025 18:12:04 +0300
> > Shani Peretz <shperetz@nvidia.com> wrote:
> >
> > > @@ -607,6 +608,9 @@ static inline struct rte_mbuf
> *rte_mbuf_raw_alloc(struct rte_mempool *mp)
> > > if (rte_mempool_get(mp, &ret.ptr) < 0)
> > > return NULL;
> > > __rte_mbuf_raw_sanity_check(ret.m);
> > > +#if RTE_MBUF_HISTORY_DEBUG
> > > + rte_mbuf_history_mark(ret.m, RTE_MBUF_ALLOC);
> > > +#endif
> > > return ret.m;
> > > }
> >
> > If you made rte_mbuf_history_mark a dummy function if
> RTE_MBUF_HISTORY_DEBUG
> > was not defined, then you could remove most of the #ifdef clutter and
> > would get type checking on normal builds.
>
> Yes good idea!
+1
> We need to check whether an empty inline function will be completely
> optimized out by the compilers (clang and GCC).
>
It works for trace.