From mboxrd@z Thu Jan 1 00:00:00 1970
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
To: <dev@dpdk.org>
Subject: [PATCH v5 1/4] net/mlx5: introduce tracepoints for mlx5 drivers
Date: Wed, 5 Jul 2023 18:31:22 +0300
Message-ID: <20230705153125.4657-2-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20230705153125.4657-1-viacheslavo@nvidia.com>
References: <20230420100803.494-1-viacheslavo@nvidia.com> <20230705153125.4657-1-viacheslavo@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain

There is an intention to use the DPDK tracing capabilities for monitoring
and profiling the mlx5 PMDs in various modes. This patch introduces
tracepoints for the Tx datapath of the Ethernet device driver.

To enable this tracing capability, the following steps should be taken:

  - build with the meson option -Denable_trace_fp=true
  - build with the meson option -Dc_args='-DALLOW_EXPERIMENTAL_API'
  - run with the EAL command line parameter --trace=pmd.net.mlx5.tx.*
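
For example, assuming an out-of-tree build directory named "build" and
using dpdk-testpmd purely as an illustration (any DPDK application can
enable these trace points the same way):

  meson setup build -Denable_trace_fp=true -Dc_args='-DALLOW_EXPERIMENTAL_API'
  ninja -C build
  ./build/app/dpdk-testpmd --trace='pmd.net.mlx5.tx.*' -- -i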

The Tx datapath tracing provides information on how packets are pushed
into hardware descriptors, timestamps for scheduled send and wait
completions, and so on. A dedicated post-processing script is expected
to present the trace results in human-readable form.

Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
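Note (not part of the commit message): the trace points emit data through
the EAL trace library, which by default writes a CTF trace under
<home dir>/dpdk-traces/. Until the dedicated post-processing script
mentioned above is available, the raw trace can be inspected with any
CTF reader, e.g. (replace the placeholder with the timestamped run
directory created by the application):

  babeltrace ~/dpdk-traces/<run directory>
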
 drivers/net/mlx5/meson.build  |  1 +
 drivers/net/mlx5/mlx5_rx.h    | 19 ---------
 drivers/net/mlx5/mlx5_rxtx.h  | 19 +++++++++
 drivers/net/mlx5/mlx5_trace.c | 25 ++++++++++++
 drivers/net/mlx5/mlx5_trace.h | 73 +++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_tx.c    |  9 +++++
 drivers/net/mlx5/mlx5_tx.h    | 26 ++++++++++++-
 7 files changed, 151 insertions(+), 21 deletions(-)
 create mode 100644 drivers/net/mlx5/mlx5_trace.c
 create mode 100644 drivers/net/mlx5/mlx5_trace.h

diff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build
index bcb9c8542f..69771c63ab 100644
--- a/drivers/net/mlx5/meson.build
+++ b/drivers/net/mlx5/meson.build
@@ -31,6 +31,7 @@ sources = files(
         'mlx5_rxtx.c',
         'mlx5_stats.c',
         'mlx5_trigger.c',
+        'mlx5_trace.c',
         'mlx5_tx.c',
         'mlx5_tx_empw.c',
         'mlx5_tx_mpw.c',
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3514edd84e..f42607dce4 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -377,25 +377,6 @@ mlx5_rx_mb2mr(struct mlx5_rxq_data *rxq, struct rte_mbuf *mb)
 	return mlx5_mr_mempool2mr_bh(mr_ctrl, mb->pool, addr);
 }
 
-/**
- * Convert timestamp from HW format to linear counter
- * from Packet Pacing Clock Queue CQE timestamp format.
- *
- * @param sh
- *   Pointer to the device shared context. Might be needed
- *   to convert according current device configuration.
- * @param ts
- *   Timestamp from CQE to convert.
- * @return
- *   UTC in nanoseconds
- */
-static __rte_always_inline uint64_t
-mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
-{
-	RTE_SET_USED(sh);
-	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
-}
-
 /**
  * Set timestamp in mbuf dynamic field.
  *
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 876aa14ae6..b109d50758 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -43,4 +43,23 @@ int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
 int mlx5_queue_state_modify(struct rte_eth_dev *dev,
 			    struct mlx5_mp_arg_queue_state_modify *sm);
 
+/**
+ * Convert timestamp from HW format to linear counter
+ * from Packet Pacing Clock Queue CQE timestamp format.
+ *
+ * @param sh
+ *   Pointer to the device shared context. Might be needed
+ *   to convert according current device configuration.
+ * @param ts
+ *   Timestamp from CQE to convert.
+ * @return
+ *   UTC in nanoseconds
+ */
+static __rte_always_inline uint64_t
+mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
+{
+	RTE_SET_USED(sh);
+	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
+}
+
 #endif /* RTE_PMD_MLX5_RXTX_H_ */
diff --git a/drivers/net/mlx5/mlx5_trace.c b/drivers/net/mlx5/mlx5_trace.c
new file mode 100644
index 0000000000..bbbfd9178c
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_trace.c
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#include <rte_trace_point_register.h>
+#include <mlx5_trace.h>
+
+/* TX burst subroutines trace points. */
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_entry,
+	pmd.net.mlx5.tx.entry)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_exit,
+	pmd.net.mlx5.tx.exit)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_wqe,
+	pmd.net.mlx5.tx.wqe)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_wait,
+	pmd.net.mlx5.tx.wait)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_push,
+	pmd.net.mlx5.tx.push)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_complete,
+	pmd.net.mlx5.tx.complete)
diff --git a/drivers/net/mlx5/mlx5_trace.h b/drivers/net/mlx5/mlx5_trace.h
new file mode 100644
index 0000000000..888d96f60b
--- /dev/null
+++ b/drivers/net/mlx5/mlx5_trace.h
@@ -0,0 +1,73 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2023 NVIDIA Corporation & Affiliates
+ */
+
+#ifndef RTE_PMD_MLX5_TRACE_H_
+#define RTE_PMD_MLX5_TRACE_H_
+
+/**
+ * @file
+ *
+ * API for mlx5 PMD trace support
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <mlx5_prm.h>
+#include <rte_mbuf.h>
+#include <rte_trace_point.h>
+
+/* TX burst subroutines trace points. */
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_entry,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_exit,
+	RTE_TRACE_POINT_ARGS(uint16_t nb_sent, uint16_t nb_req),
+	rte_trace_point_emit_u16(nb_sent);
+	rte_trace_point_emit_u16(nb_req);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_wqe,
+	RTE_TRACE_POINT_ARGS(uint32_t opcode),
+	rte_trace_point_emit_u32(opcode);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_wait,
+	RTE_TRACE_POINT_ARGS(uint64_t ts),
+	rte_trace_point_emit_u64(ts);
+)
+
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_push,
+	RTE_TRACE_POINT_ARGS(const struct rte_mbuf *mbuf, uint16_t wqe_id),
+	rte_trace_point_emit_ptr(mbuf);
+	rte_trace_point_emit_u32(mbuf->pkt_len);
+	rte_trace_point_emit_u16(mbuf->nb_segs);
+	rte_trace_point_emit_u16(wqe_id);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_complete,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+			     uint16_t wqe_id, uint64_t ts),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_u64(ts);
+	rte_trace_point_emit_u16(wqe_id);
+)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_PMD_MLX5_TRACE_H_ */
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index 14e1487e59..1fe9521dfc 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -232,6 +232,15 @@ mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
 			MLX5_ASSERT((txq->fcqs[txq->cq_ci & txq->cqe_m] >> 16) ==
 				    cqe->wqe_counter);
 #endif
+			if (__rte_trace_point_fp_is_enabled()) {
+				uint64_t ts = rte_be_to_cpu_64(cqe->timestamp);
+				uint16_t wqe_id = rte_be_to_cpu_16(cqe->wqe_counter);
+
+				if (txq->rt_timestamp)
+					ts = mlx5_txpp_convert_rx_ts(NULL, ts);
+				rte_pmd_mlx5_trace_tx_complete(txq->port_id, txq->idx,
+							       wqe_id, ts);
+			}
 			ring_doorbell = true;
 			++txq->cq_ci;
 			last_cqe = cqe;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index cc8f7e98aa..5df0c4a794 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -13,12 +13,15 @@
 #include
 #include
 #include
+#include
 
 #include
 #include
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_rxtx.h"
+#include "mlx5_trace.h"
 
 /* TX burst subroutines return codes. */
 enum mlx5_txcmp_code {
@@ -764,6 +767,9 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
 	cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
 			     MLX5_COMP_MODE_OFFSET);
 	cs->misc = RTE_BE32(0);
+	if (__rte_trace_point_fp_is_enabled() && !loc->pkts_sent)
+		rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
+	rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
 }
 
 /**
@@ -1692,6 +1698,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 		if (txq->wait_on_time) {
 			/* The wait on time capability should be used. */
 			ts -= sh->txpp.skew;
+			rte_pmd_mlx5_trace_tx_wait(ts);
 			mlx5_tx_cseg_init(txq, loc, wqe,
 					  1 + sizeof(struct mlx5_wqe_wseg) /
 					      MLX5_WSEG_SIZE,
@@ -1706,6 +1713,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 			if (unlikely(wci < 0))
 				return MLX5_TXCMP_CODE_SINGLE;
 			/* Build the WAIT WQE with specified completion. */
+			rte_pmd_mlx5_trace_tx_wait(ts - sh->txpp.skew);
 			mlx5_tx_cseg_init(txq, loc, wqe,
 					  1 + sizeof(struct mlx5_wqe_qseg) /
 					      MLX5_WSEG_SIZE,
@@ -1810,6 +1818,7 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, 0, MLX5_OPCODE_TSO, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	ds = mlx5_tx_mseg_build(txq, loc, wqe, vlan, inlen, 1, olx);
 	wqe->cseg.sq_ds = rte_cpu_to_be_32(txq->qp_num_8s | ds);
 	txq->wqe_ci += (ds + 3) / 4;
@@ -1892,6 +1901,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, ds, MLX5_OPCODE_SEND, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	mlx5_tx_eseg_none(txq, loc, wqe, olx);
 	dseg = &wqe->dseg[0];
 	do {
@@ -2115,6 +2125,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, 0, MLX5_OPCODE_SEND, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	ds = mlx5_tx_mseg_build(txq, loc, wqe, vlan, inlen, 0, olx);
 	wqe->cseg.sq_ds = rte_cpu_to_be_32(txq->qp_num_8s | ds);
 	txq->wqe_ci += (ds + 3) / 4;
@@ -2318,8 +2329,8 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		 */
 		wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 		loc->wqe_last = wqe;
-		mlx5_tx_cseg_init(txq, loc, wqe, ds,
-				  MLX5_OPCODE_TSO, olx);
+		mlx5_tx_cseg_init(txq, loc, wqe, ds, MLX5_OPCODE_TSO, olx);
+		rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 		dseg = mlx5_tx_eseg_data(txq, loc, wqe, vlan, hlen, 1, olx);
 		dptr = rte_pktmbuf_mtod(loc->mbuf, uint8_t *) + hlen - vlan;
 		dlen -= hlen - vlan;
@@ -2688,6 +2699,7 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
 		/* Update sent data bytes counter. */
 		slen += dlen;
 #endif
+		rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 		mlx5_tx_dseg_ptr
 			(txq, loc, dseg,
 			 rte_pktmbuf_mtod(loc->mbuf, uint8_t *),
@@ -2926,6 +2938,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 					tlen += sizeof(struct rte_vlan_hdr);
 				if (room < tlen)
 					break;
+				rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 				dseg = mlx5_tx_dseg_vlan(txq, loc, dseg,
 							 dptr, dlen, olx);
 #ifdef MLX5_PMD_SOFT_COUNTERS
@@ -2935,6 +2948,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			} else {
 				if (room < tlen)
 					break;
+				rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 				dseg = mlx5_tx_dseg_empw(txq, loc, dseg,
 							 dptr, dlen, olx);
 			}
@@ -2980,6 +2994,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 		if (MLX5_TXOFF_CONFIG(VLAN))
 			MLX5_ASSERT(!(loc->mbuf->ol_flags &
 				      RTE_MBUF_F_TX_VLAN));
+		rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 		mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 		/* We have to store mbuf in elts.*/
 		txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3194,6 +3209,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, seg_n,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_data(txq, loc, wqe, vlan, inlen, 0, olx);
 			txq->wqe_ci += wqe_n;
 
@@ -3256,6 +3272,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, ds,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			dseg = mlx5_tx_eseg_data(txq, loc, wqe, vlan,
 						 txq->inlen_mode, 0, olx);
 
@@ -3297,6 +3314,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, 4,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_dmin(txq, loc, wqe, vlan, olx);
 			dptr = rte_pktmbuf_mtod(loc->mbuf, uint8_t *) +
 			       MLX5_ESEG_MIN_INLINE_SIZE - vlan;
@@ -3338,6 +3356,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, 3,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_none(txq, loc, wqe, olx);
 			mlx5_tx_dseg_ptr
 				(txq, loc, &wqe->dseg[0],
@@ -3707,6 +3726,9 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 #endif
 	if (MLX5_TXOFF_CONFIG(INLINE) && loc.mbuf_free)
 		__mlx5_tx_free_mbuf(txq, pkts, loc.mbuf_free, olx);
+	/* Trace productive bursts only. */
+	if (__rte_trace_point_fp_is_enabled() && loc.pkts_sent)
+		rte_pmd_mlx5_trace_tx_exit(loc.pkts_sent, pkts_n);
 	return loc.pkts_sent;
 }
-- 
2.18.1