* [PATCH 1/4] net/mlx5/tools: fix trace dump multiple burst completions
From: Viacheslav Ovsiienko @ 2024-10-09 11:40 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

If multiple bursts were completed within a single completion, only the
first burst was moved to the done list. The situation is not typical:
tracing is usually used to debug scheduled traffic, where each burst
has its own completion requested, so completions covering multiple
bursts do not occur.

Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/tools/mlx5_trace.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 8c1fd0a350..67461520a9 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -258,13 +258,14 @@ def do_tx_complete(msg, trace):
         if burst.comp(wqe_id, wqe_ts) == 0:
             break
         rmv += 1
-    # mode completed burst to done list
+    # move completed burst(s) to done list
     if rmv != 0:
         idx = 0
         while idx < rmv:
+            burst = queue.wait_burst[idx]
             queue.done_burst.append(burst)
             idx += 1
-        del queue.wait_burst[0:rmv]
+        queue.wait_burst = queue.wait_burst[rmv:]


 def do_tx(msg, trace):
--
2.34.1
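For reference, the pre-fix loop never re-read queue.wait_burst[idx]: it
appended whatever object the `burst` variable happened to reference after
the completion scan, rmv times over. A minimal standalone sketch of the
corrected wait-to-done splice, with plain strings standing in for the
script's MlxBurst objects (the helper name is hypothetical, illustration
only):

    # Minimal sketch of the corrected wait->done splice; plain strings
    # stand in for the script's MlxBurst objects.
    def splice_completed(wait_burst, done_burst, rmv):
        """Move the first rmv entries of wait_burst to done_burst."""
        idx = 0
        while idx < rmv:
            burst = wait_burst[idx]  # re-read each entry -- the actual fix
            done_burst.append(burst)
            idx += 1
        return wait_burst[rmv:]

    # One CQE completing three outstanding bursts:
    wait = ["burst0", "burst1", "burst2", "burst3"]
    done = []
    wait = splice_completed(wait, done, 3)
    assert wait == ["burst3"]
    assert done == ["burst0", "burst1", "burst2"]

The patch also replaces del queue.wait_burst[0:rmv] with a slice
assignment; both drop the moved prefix, so the observable result is the
same.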
* RE: [PATCH 1/4] net/mlx5/tools: fix trace dump multiple burst completions
From: Dariusz Sosnowski @ 2024-10-09 13:08 UTC
To: Slava Ovsiienko, dev; +Cc: Matan Azrad, Raslan Darawsheh, Ori Kam, stable

> -----Original Message-----
> From: Slava Ovsiienko <viacheslavo@nvidia.com>
> Sent: Wednesday, October 9, 2024 13:40
> To: dev@dpdk.org
> Cc: Matan Azrad <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>;
> Ori Kam <orika@nvidia.com>; Dariusz Sosnowski <dsosnowski@nvidia.com>;
> stable@dpdk.org
> Subject: [PATCH 1/4] net/mlx5/tools: fix trace dump multiple burst completions
>
> If multiple bursts were completed within a single completion, only the
> first burst was moved to the done list. The situation is not typical:
> tracing is usually used to debug scheduled traffic, where each burst
> has its own completion requested, so completions covering multiple
> bursts do not occur.
>
> Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
> Cc: stable@dpdk.org
>
> Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Best regards,
Dariusz Sosnowski
* [PATCH 2/4] net/mlx5: fix real time counter reading from PCI BAR
From: Viacheslav Ovsiienko @ 2024-10-09 11:40 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, Tim Martin, stable

From: Tim Martin <timothym@nvidia.com>

The mlx5_txpp_read_clock() routine reads the 64-bit real time counter
from the device PCI BAR. It introduced two issues:

  - it checks whether the PCI BAR is mapped into the process address
    space and tries to map it on demand. This is problematic when the
    mapping fails: the attempt is repeated on every read_clock API
    call and invokes the kernel each time, taking a long time and
    causing application malfunction;

  - the 64-bit counter should be read in a single atomic transaction.

Fixes: 9b31fc9007f9 ("net/mlx5: fix read device clock in real time mode")
Cc: stable@dpdk.org

Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 .mailmap                     |  1 +
 drivers/net/mlx5/mlx5.c      |  4 +++
 drivers/net/mlx5/mlx5_tx.h   | 53 +++++++++++++++++++++++++++++++++---
 drivers/net/mlx5/mlx5_txpp.c | 11 ++------
 4 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/.mailmap b/.mailmap
index dff07122f3..e36e0a4766 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1494,6 +1494,7 @@ Timmons C. Player <timmons.player@spirent.com>
 Timothy McDaniel <timothy.mcdaniel@intel.com>
 Timothy Miskell <timothy.miskell@intel.com>
 Timothy Redaelli <tredaelli@redhat.com>
+Tim Martin <timothym@nvidia.com>
 Tim Shearer <tim.shearer@overturenetworks.com>
 Ting-Kai Ku <ting-kai.ku@intel.com>
 Ting Xu <ting.xu@intel.com>
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index cf34766a50..14676be484 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2242,6 +2242,7 @@ int
 mlx5_proc_priv_init(struct rte_eth_dev *dev)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
+        struct mlx5_dev_ctx_shared *sh = priv->sh;
         struct mlx5_proc_priv *ppriv;
         size_t ppriv_size;

@@ -2262,6 +2263,9 @@ mlx5_proc_priv_init(struct rte_eth_dev *dev)
         dev->process_private = ppriv;
         if (rte_eal_process_type() == RTE_PROC_PRIMARY)
                 priv->sh->pppriv = ppriv;
+        /* Check and try to map HCA PCI BAR to allow reading real time. */
+        if (sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
+                mlx5_txpp_map_hca_bar(dev);
         return 0;
 }

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 983913faa2..55568c41b1 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -372,6 +372,46 @@ mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts)
         return ci;
 }

+/**
+ * Read real time clock counter directly from the device PCI BAR area.
+ * The PCI BAR must be mapped to the process memory space at initialization.
+ *
+ * @param dev
+ *   Device to read clock counter from
+ *
+ * @return
+ *   0 - if HCA BAR is not supported or not mapped.
+ *   !=0 - read 64-bit value of real-time in UTC format (nanoseconds)
+ */
+static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *dev)
+{
+        struct mlx5_proc_priv *ppriv = dev->process_private;
+
+        if (ppriv && ppriv->hca_bar) {
+                struct mlx5_priv *priv = dev->data->dev_private;
+                struct mlx5_dev_ctx_shared *sh = priv->sh;
+                uint64_t *hca_ptr = (uint64_t *)(ppriv->hca_bar) +
+                                    __mlx5_64_off(initial_seg, real_time);
+                uint64_t __rte_atomic *ts_addr;
+                uint64_t ts;
+
+                ts_addr = (uint64_t __rte_atomic *)hca_ptr;
+                ts = rte_atomic_load_explicit(ts_addr, rte_memory_order_seq_cst);
+                ts = rte_be_to_cpu_64(ts);
+                ts = mlx5_txpp_convert_rx_ts(sh, ts);
+                return ts;
+        }
+        return 0;
+}
+
+static __rte_always_inline uint64_t mlx5_read_pcibar_clock_from_txq(struct mlx5_txq_data *txq)
+{
+        struct mlx5_txq_ctrl *txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
+        struct rte_eth_dev *dev = ETH_DEV(txq_ctrl->priv);
+
+        return mlx5_read_pcibar_clock(dev);
+}
+
 /**
  * Set Software Parser flags and offsets in Ethernet Segment of WQE.
  * Flags must be preliminary initialized to zero.
@@ -809,6 +849,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
                   unsigned int olx)
 {
         struct mlx5_wqe_cseg *__rte_restrict cs = &wqe->cseg;
+        uint64_t real_time;

         /* For legacy MPW replace the EMPW by TSO with modifier. */
         if (MLX5_TXOFF_CONFIG(MPW) && opcode == MLX5_OPCODE_ENHANCED_MPSW)
@@ -822,9 +863,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
         cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
                              MLX5_COMP_MODE_OFFSET);
         cs->misc = RTE_BE32(0);
-        if (__rte_trace_point_fp_is_enabled() && !loc->pkts_sent)
-                rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
-        rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
+        if (__rte_trace_point_fp_is_enabled()) {
+                real_time = mlx5_read_pcibar_clock_from_txq(txq);
+                if (!loc->pkts_sent)
+                        rte_pmd_mlx5_trace_tx_entry(real_time, txq->port_id, txq->idx);
+                rte_pmd_mlx5_trace_tx_wqe(real_time, (txq->wqe_ci << 8) | opcode);
+        }
 }

 /**
@@ -3786,7 +3830,8 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
                 __mlx5_tx_free_mbuf(txq, pkts, loc.mbuf_free, olx);
         /* Trace productive bursts only. */
         if (__rte_trace_point_fp_is_enabled() && loc.pkts_sent)
-                rte_pmd_mlx5_trace_tx_exit(loc.pkts_sent, pkts_n);
+                rte_pmd_mlx5_trace_tx_exit(mlx5_read_pcibar_clock_from_txq(txq),
+                                           loc.pkts_sent, pkts_n);
         return loc.pkts_sent;
 }

diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4e26fa2db8..e6d3ad83e9 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -971,7 +971,6 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
         struct mlx5_dev_ctx_shared *sh = priv->sh;
-        struct mlx5_proc_priv *ppriv;
         uint64_t ts;
         int ret;

@@ -997,15 +996,9 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
                 *timestamp = ts;
                 return 0;
         }
-        /* Check and try to map HCA PIC BAR to allow reading real time. */
-        ppriv = dev->process_private;
-        if (ppriv && !ppriv->hca_bar &&
-            sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
-                mlx5_txpp_map_hca_bar(dev);
         /* Check if we can read timestamp directly from hardware. */
-        if (ppriv && ppriv->hca_bar) {
-                ts = MLX5_GET64(initial_seg, ppriv->hca_bar, real_time);
-                ts = mlx5_txpp_convert_rx_ts(sh, ts);
+        ts = mlx5_read_pcibar_clock(dev);
+        if (ts != 0) {
                 *timestamp = ts;
                 return 0;
         }
--
2.34.1
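The atomicity issue in the second bullet deserves a concrete
illustration: if the 64-bit nanosecond counter were fetched as two
separate 32-bit words, a low-half wraparound between the two reads would
assemble a timestamp belonging to neither instant. A small Python sketch
of such a torn read (illustrative only; the patch avoids this by issuing
a single rte_atomic_load_explicit() on the mapped BAR):

    # Illustration of a torn 64-bit read; not driver code. The reader
    # fetches the high word, the counter's low half wraps, then the
    # reader fetches the low word -- the assembled value is bogus.
    def torn_read(before, after):
        high = before >> 32            # first 32-bit access (old high half)
        low = after & 0xFFFFFFFF       # second 32-bit access (new low half)
        return (high << 32) | low

    t0 = 0x00000001FFFFFFFF            # nanosecond counter before the wrap
    t1 = 0x0000000200000005            # counter a few nanoseconds later
    ts = torn_read(t0, t1)
    assert ts == 0x0000000100000005    # ~4.3 seconds in the past
    assert ts not in (t0, t1)          # matches neither real instant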
* [PATCH 3/4] net/mlx5: fix Tx tracing to use single clock source
From: Viacheslav Ovsiienko @ 2024-10-09 11:40 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, Tim Martin, stable

From: Tim Martin <timothym@nvidia.com>

The prior commit introduced tracing for mlx5, but it mixed two
unrelated clocks: the TSC for host work submission timestamps and the
NIC HW clock for CQE completion times. Timestamps must come from a
single common clock, and the NIC HW clock is the better choice since
it can be used with externally synchronized clocks.

This patch adds the NIC HW clock as an additional logged parameter for
trace_tx_entry, trace_tx_exit, and trace_tx_wqe. The included trace
analysis Python script is also updated to use the new clock when it is
available.

Fixes: a1e910f5b8d4 ("net/mlx5: introduce tracepoints")
Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Cc: stable@dpdk.org

Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_trace.h        |  9 ++++++---
 drivers/net/mlx5/tools/mlx5_trace.py | 12 +++++++++---
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trace.h b/drivers/net/mlx5/mlx5_trace.h
index 888d96f60b..656dbb1a4f 100644
--- a/drivers/net/mlx5/mlx5_trace.h
+++ b/drivers/net/mlx5/mlx5_trace.h
@@ -22,21 +22,24 @@ extern "C" {

 /* TX burst subroutines trace points. */
 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_entry,
-        RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t port_id, uint16_t queue_id),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u16(port_id);
         rte_trace_point_emit_u16(queue_id);
 )

 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_exit,
-        RTE_TRACE_POINT_ARGS(uint16_t nb_sent, uint16_t nb_req),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t nb_sent, uint16_t nb_req),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u16(nb_sent);
         rte_trace_point_emit_u16(nb_req);
 )

 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_wqe,
-        RTE_TRACE_POINT_ARGS(uint32_t opcode),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint32_t opcode),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u32(opcode);
 )

diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 67461520a9..5eb634a490 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -174,7 +174,9 @@ def do_tx_entry(msg, trace):
         return
     # allocate the new burst and append to the queue
     burst = MlxBurst()
-    burst.call_ts = msg.default_clock_snapshot.ns_from_origin
+    burst.call_ts = event["real_time"]
+    if burst.call_ts == 0:
+        burst.call_ts = msg.default_clock_snapshot.ns_from_origin
     trace.tx_blst[cpu_id] = burst
     pq_id = event["port_id"] << 16 | event["queue_id"]
     queue = trace.tx_qlst.get(pq_id)
@@ -194,7 +196,9 @@ def do_tx_exit(msg, trace):
     burst = trace.tx_blst.get(cpu_id)
     if burst is None:
         return
-    burst.done_ts = msg.default_clock_snapshot.ns_from_origin
+    burst.done_ts = event["real_time"]
+    if burst.done_ts == 0:
+        burst.done_ts = msg.default_clock_snapshot.ns_from_origin
     burst.req = event["nb_req"]
     burst.done = event["nb_sent"]
     trace.tx_blst.pop(cpu_id)
@@ -210,7 +214,9 @@ def do_tx_wqe(msg, trace):
     wqe = MlxWqe()
     wqe.wait_ts = trace.tx_wlst.get(cpu_id)
     if wqe.wait_ts is None:
-        wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
+        wqe.wait_ts = event["real_time"]
+        if wqe.wait_ts == 0:
+            wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
     wqe.opcode = event["opcode"]
     burst.wqes.append(wqe)

--
2.34.1
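The script-side change applies one selection rule in all three handlers:
prefer the event's new real_time field (NIC HW clock) and fall back to
the CTF clock snapshot when the driver logged zero because the PCI BAR
clock was unavailable. The rule could be factored as a helper along
these lines (hypothetical name, not part of the patch):

    # Hypothetical helper factoring out the timestamp selection repeated
    # in do_tx_entry/do_tx_exit/do_tx_wqe; not part of the patch itself.
    def event_timestamp(msg, event):
        """Prefer the NIC HW clock; fall back to the CTF trace clock."""
        ts = event["real_time"]
        if ts == 0:
            # Driver logged 0: the PCI BAR clock was unavailable, so use
            # the babeltrace2 per-message clock snapshot instead.
            ts = msg.default_clock_snapshot.ns_from_origin
        return ts

Each handler would then read, for example,
burst.call_ts = event_timestamp(msg, event).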
* [PATCH v2 1/4] net/mlx5/tools: fix trace dump multiple burst completions
From: Viacheslav Ovsiienko @ 2024-10-14  8:04 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, stable

If multiple bursts were completed within a single completion, only the
first burst was moved to the done list. The situation is not typical:
tracing is usually used to debug scheduled traffic, where each burst
has its own completion requested, so completions covering multiple
bursts do not occur.

Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Cc: stable@dpdk.org

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/tools/mlx5_trace.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 8c1fd0a350..67461520a9 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -258,13 +258,14 @@ def do_tx_complete(msg, trace):
         if burst.comp(wqe_id, wqe_ts) == 0:
             break
         rmv += 1
-    # mode completed burst to done list
+    # move completed burst(s) to done list
     if rmv != 0:
         idx = 0
         while idx < rmv:
+            burst = queue.wait_burst[idx]
             queue.done_burst.append(burst)
             idx += 1
-        del queue.wait_burst[0:rmv]
+        queue.wait_burst = queue.wait_burst[rmv:]


 def do_tx(msg, trace):
--
2.34.1
* [PATCH v2 2/4] net/mlx5: fix real time counter reading from PCI BAR
From: Viacheslav Ovsiienko @ 2024-10-14  8:04 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, Tim Martin, stable

From: Tim Martin <timothym@nvidia.com>

The mlx5_txpp_read_clock() routine reads the 64-bit real time counter
from the device PCI BAR. It introduced two issues:

  - it checks whether the PCI BAR is mapped into the process address
    space and tries to map it on demand. This is problematic when the
    mapping fails: the attempt is repeated on every read_clock API
    call and invokes the kernel each time, taking a long time and
    causing application malfunction;

  - the 64-bit counter should be read in a single atomic transaction.

Fixes: 9b31fc9007f9 ("net/mlx5: fix read device clock in real time mode")
Cc: stable@dpdk.org

Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 .mailmap                     |  1 +
 drivers/net/mlx5/mlx5.c      |  4 ++++
 drivers/net/mlx5/mlx5_tx.h   | 34 +++++++++++++++++++++++++++++++++-
 drivers/net/mlx5/mlx5_txpp.c | 11 ++---------
 4 files changed, 40 insertions(+), 10 deletions(-)

diff --git a/.mailmap b/.mailmap
index b30d993f3b..3065211c0a 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1505,6 +1505,7 @@ Timmons C. Player <timmons.player@spirent.com>
 Timothy McDaniel <timothy.mcdaniel@intel.com>
 Timothy Miskell <timothy.miskell@intel.com>
 Timothy Redaelli <tredaelli@redhat.com>
+Tim Martin <timothym@nvidia.com>
 Tim Shearer <tim.shearer@overturenetworks.com>
 Ting-Kai Ku <ting-kai.ku@intel.com>
 Ting Xu <ting.xu@intel.com>
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e36fa651a1..52b90e6ff3 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2242,6 +2242,7 @@ int
 mlx5_proc_priv_init(struct rte_eth_dev *dev)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
+        struct mlx5_dev_ctx_shared *sh = priv->sh;
         struct mlx5_proc_priv *ppriv;
         size_t ppriv_size;

@@ -2262,6 +2263,9 @@ mlx5_proc_priv_init(struct rte_eth_dev *dev)
         dev->process_private = ppriv;
         if (rte_eal_process_type() == RTE_PROC_PRIMARY)
                 priv->sh->pppriv = ppriv;
+        /* Check and try to map HCA PCI BAR to allow reading real time. */
+        if (sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
+                mlx5_txpp_map_hca_bar(dev);
         return 0;
 }

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 983913faa2..587e6a9f7d 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -372,6 +372,38 @@ mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts)
         return ci;
 }

+/**
+ * Read real time clock counter directly from the device PCI BAR area.
+ * The PCI BAR must be mapped to the process memory space at initialization.
+ *
+ * @param dev
+ *   Device to read clock counter from
+ *
+ * @return
+ *   0 - if HCA BAR is not supported or not mapped.
+ *   !=0 - read 64-bit value of real-time in UTC format (nanoseconds)
+ */
+static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *dev)
+{
+        struct mlx5_proc_priv *ppriv = dev->process_private;
+
+        if (ppriv && ppriv->hca_bar) {
+                struct mlx5_priv *priv = dev->data->dev_private;
+                struct mlx5_dev_ctx_shared *sh = priv->sh;
+                uint64_t *hca_ptr = (uint64_t *)(ppriv->hca_bar) +
+                                    __mlx5_64_off(initial_seg, real_time);
+                uint64_t __rte_atomic *ts_addr;
+                uint64_t ts;
+
+                ts_addr = (uint64_t __rte_atomic *)hca_ptr;
+                ts = rte_atomic_load_explicit(ts_addr, rte_memory_order_seq_cst);
+                ts = rte_be_to_cpu_64(ts);
+                ts = mlx5_txpp_convert_rx_ts(sh, ts);
+                return ts;
+        }
+        return 0;
+}
+
 /**
  * Set Software Parser flags and offsets in Ethernet Segment of WQE.
  * Flags must be preliminary initialized to zero.
@@ -822,7 +854,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
         cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
                              MLX5_COMP_MODE_OFFSET);
         cs->misc = RTE_BE32(0);
-        if (__rte_trace_point_fp_is_enabled() && !loc->pkts_sent)
+        if (__rte_trace_point_fp_is_enabled())
                 rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
         rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
 }
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 4e26fa2db8..e6d3ad83e9 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -971,7 +971,6 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
 {
         struct mlx5_priv *priv = dev->data->dev_private;
         struct mlx5_dev_ctx_shared *sh = priv->sh;
-        struct mlx5_proc_priv *ppriv;
         uint64_t ts;
         int ret;

@@ -997,15 +996,9 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
                 *timestamp = ts;
                 return 0;
         }
-        /* Check and try to map HCA PIC BAR to allow reading real time. */
-        ppriv = dev->process_private;
-        if (ppriv && !ppriv->hca_bar &&
-            sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
-                mlx5_txpp_map_hca_bar(dev);
         /* Check if we can read timestamp directly from hardware. */
-        if (ppriv && ppriv->hca_bar) {
-                ts = MLX5_GET64(initial_seg, ppriv->hca_bar, real_time);
-                ts = mlx5_txpp_convert_rx_ts(sh, ts);
+        ts = mlx5_read_pcibar_clock(dev);
+        if (ts != 0) {
                 *timestamp = ts;
                 return 0;
         }
--
2.34.1
* [PATCH v2 3/4] net/mlx5: fix Tx tracing to use single clock source
From: Viacheslav Ovsiienko @ 2024-10-14  8:04 UTC
To: dev; +Cc: matan, rasland, orika, dsosnowski, Tim Martin, stable

From: Tim Martin <timothym@nvidia.com>

The prior commit introduced tracing for mlx5, but it mixed two
unrelated clocks: the TSC for host work submission timestamps and the
NIC HW clock for CQE completion times. Timestamps must come from a
single common clock, and the NIC HW clock is the better choice since
it can be used with externally synchronized clocks.

This patch adds the NIC HW clock as an additional logged parameter for
trace_tx_entry, trace_tx_exit, and trace_tx_wqe. The included trace
analysis Python script is also updated to use the new clock when it is
available.

Fixes: a1e910f5b8d4 ("net/mlx5: introduce tracepoints")
Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Cc: stable@dpdk.org

Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_trace.h        |  9 ++++++---
 drivers/net/mlx5/mlx5_tx.h           | 21 +++++++++++++++++----
 drivers/net/mlx5/tools/mlx5_trace.py | 12 +++++++++---
 3 files changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_trace.h b/drivers/net/mlx5/mlx5_trace.h
index a8f0b372c8..4fc3584acc 100644
--- a/drivers/net/mlx5/mlx5_trace.h
+++ b/drivers/net/mlx5/mlx5_trace.h
@@ -22,21 +22,24 @@ extern "C" {

 /* TX burst subroutines trace points. */
 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_entry,
-        RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t port_id, uint16_t queue_id),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u16(port_id);
         rte_trace_point_emit_u16(queue_id);
 )

 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_exit,
-        RTE_TRACE_POINT_ARGS(uint16_t nb_sent, uint16_t nb_req),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t nb_sent, uint16_t nb_req),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u16(nb_sent);
         rte_trace_point_emit_u16(nb_req);
 )

 RTE_TRACE_POINT_FP(
         rte_pmd_mlx5_trace_tx_wqe,
-        RTE_TRACE_POINT_ARGS(uint32_t opcode),
+        RTE_TRACE_POINT_ARGS(uint64_t real_time, uint32_t opcode),
+        rte_trace_point_emit_u64(real_time);
         rte_trace_point_emit_u32(opcode);
 )

diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 587e6a9f7d..55568c41b1 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -404,6 +404,14 @@ static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *d
         return 0;
 }

+static __rte_always_inline uint64_t mlx5_read_pcibar_clock_from_txq(struct mlx5_txq_data *txq)
+{
+        struct mlx5_txq_ctrl *txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
+        struct rte_eth_dev *dev = ETH_DEV(txq_ctrl->priv);
+
+        return mlx5_read_pcibar_clock(dev);
+}
+
 /**
  * Set Software Parser flags and offsets in Ethernet Segment of WQE.
  * Flags must be preliminary initialized to zero.
@@ -841,6 +849,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
                   unsigned int olx)
 {
         struct mlx5_wqe_cseg *__rte_restrict cs = &wqe->cseg;
+        uint64_t real_time;

         /* For legacy MPW replace the EMPW by TSO with modifier. */
         if (MLX5_TXOFF_CONFIG(MPW) && opcode == MLX5_OPCODE_ENHANCED_MPSW)
@@ -854,9 +863,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
         cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
                              MLX5_COMP_MODE_OFFSET);
         cs->misc = RTE_BE32(0);
-        if (__rte_trace_point_fp_is_enabled())
-                rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
-        rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
+        if (__rte_trace_point_fp_is_enabled()) {
+                real_time = mlx5_read_pcibar_clock_from_txq(txq);
+                if (!loc->pkts_sent)
+                        rte_pmd_mlx5_trace_tx_entry(real_time, txq->port_id, txq->idx);
+                rte_pmd_mlx5_trace_tx_wqe(real_time, (txq->wqe_ci << 8) | opcode);
+        }
 }

 /**
@@ -3818,7 +3830,8 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
                 __mlx5_tx_free_mbuf(txq, pkts, loc.mbuf_free, olx);
         /* Trace productive bursts only. */
         if (__rte_trace_point_fp_is_enabled() && loc.pkts_sent)
-                rte_pmd_mlx5_trace_tx_exit(loc.pkts_sent, pkts_n);
+                rte_pmd_mlx5_trace_tx_exit(mlx5_read_pcibar_clock_from_txq(txq),
+                                           loc.pkts_sent, pkts_n);
         return loc.pkts_sent;
 }

diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 67461520a9..5eb634a490 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -174,7 +174,9 @@ def do_tx_entry(msg, trace):
         return
     # allocate the new burst and append to the queue
     burst = MlxBurst()
-    burst.call_ts = msg.default_clock_snapshot.ns_from_origin
+    burst.call_ts = event["real_time"]
+    if burst.call_ts == 0:
+        burst.call_ts = msg.default_clock_snapshot.ns_from_origin
     trace.tx_blst[cpu_id] = burst
     pq_id = event["port_id"] << 16 | event["queue_id"]
     queue = trace.tx_qlst.get(pq_id)
@@ -194,7 +196,9 @@ def do_tx_exit(msg, trace):
     burst = trace.tx_blst.get(cpu_id)
     if burst is None:
         return
-    burst.done_ts = msg.default_clock_snapshot.ns_from_origin
+    burst.done_ts = event["real_time"]
+    if burst.done_ts == 0:
+        burst.done_ts = msg.default_clock_snapshot.ns_from_origin
     burst.req = event["nb_req"]
     burst.done = event["nb_sent"]
     trace.tx_blst.pop(cpu_id)
@@ -210,7 +214,9 @@ def do_tx_wqe(msg, trace):
     wqe = MlxWqe()
     wqe.wait_ts = trace.tx_wlst.get(cpu_id)
     if wqe.wait_ts is None:
-        wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
+        wqe.wait_ts = event["real_time"]
+        if wqe.wait_ts == 0:
+            wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
     wqe.opcode = event["opcode"]
     burst.wqes.append(wqe)

--
2.34.1
Thread overview: 7+ messages in thread
  [not found] <20241009114028.973284-1-viacheslavo@nvidia.com>
  2024-10-09 11:40 ` [PATCH 1/4] net/mlx5/tools: fix trace dump multiple burst completions Viacheslav Ovsiienko
  2024-10-09 13:08   ` Dariusz Sosnowski
  2024-10-09 11:40 ` [PATCH 2/4] net/mlx5: fix real time counter reading from PCI BAR Viacheslav Ovsiienko
  2024-10-09 11:40 ` [PATCH 3/4] net/mlx5: fix Tx tracing to use single clock source Viacheslav Ovsiienko
  [not found] ` <20241014080434.1211629-1-viacheslavo@nvidia.com>
  2024-10-14  8:04   ` [PATCH v2 1/4] net/mlx5/tools: fix trace dump multiple burst completions Viacheslav Ovsiienko
  2024-10-14  8:04   ` [PATCH v2 2/4] net/mlx5: fix real time counter reading from PCI BAR Viacheslav Ovsiienko
  2024-10-14  8:04   ` [PATCH v2 3/4] net/mlx5: fix Tx tracing to use single clock source Viacheslav Ovsiienko