From mboxrd@z Thu Jan 1 00:00:00 1970
From: Viacheslav Ovsiienko
Subject: [PATCH v3 1/4] net/mlx5: introduce tracepoints for mlx5 drivers
Date: Wed, 28 Jun 2023 14:09:55 +0300
Message-ID: <20230628110958.1403-2-viacheslavo@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20230628110958.1403-1-viacheslavo@nvidia.com>
References: <20230420100803.494-1-viacheslavo@nvidia.com>
 <20230628110958.1403-1-viacheslavo@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions

There is an intention to engage the DPDK tracing capabilities for
monitoring and profiling of the mlx5 PMDs in various modes. This patch
introduces tracepoints for the Tx datapath of the Ethernet device
driver.

To enable this tracing capability the following steps should be taken:

- build with the meson option -Denable_trace_fp=true
- build with the meson option -Dc_args='-DALLOW_EXPERIMENTAL_API'
- run with the EAL command line parameter --trace=pmd.net.mlx5.tx.*

The Tx datapath tracing provides information on how packets are pushed
into the hardware descriptors, as well as timestamps of the scheduled
wait and send completions. A dedicated post-processing script is
expected to present the trace results in a human-readable form
(illustrative usage notes follow below the "---" marker).

Signed-off-by: Viacheslav Ovsiienko
---
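Notes (below the cut line, not part of the commit message):

Besides the EAL command line parameter mentioned above, the new
pmd.net.mlx5.tx.* trace points can also be controlled from the
application itself through the generic rte_trace API. The sketch below
is only an illustration and is not part of this patch; it assumes a
DPDK build with -Denable_trace_fp=true and ALLOW_EXPERIMENTAL_API. The
saved trace is written in CTF format, so generic readers such as
babeltrace should be able to open it in addition to the dedicated
post-processing script mentioned above.

    /* Illustration only (not part of this patch): enable the mlx5 Tx
     * trace points at runtime and save the collected trace.
     */
    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_trace.h>

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return -1;
            /* Same effect as the --trace=pmd.net.mlx5.tx.* EAL parameter. */
            if (rte_trace_pattern("pmd.net.mlx5.tx.*", true) < 0)
                    printf("cannot enable the mlx5 Tx trace points\n");

            /* ... run the Tx datapath (rte_eth_tx_burst() loop) here ... */

            /* Write the collected trace to the trace directory. */
            if (rte_trace_save() < 0)
                    printf("saving the trace failed\n");
            rte_eal_cleanup();
            return 0;
    }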
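A further note on the timestamp reported by the completion trace point:
the helper mlx5_txpp_convert_rx_ts(), moved from mlx5_rx.h to
mlx5_rxtx.h by this patch, converts the Clock Queue CQE timestamp
format into a linear nanosecond counter; as the helper implements it,
the upper 32 bits are taken as seconds and the lower 32 bits as
nanoseconds. mlx5_tx_handle_completion() applies it to the
pmd.net.mlx5.tx.complete event when txq->rt_timestamp is set. The
standalone sketch below only restates that arithmetic with an arbitrary
sample value; it is not part of this patch.

    /* Illustration only: the conversion performed by
     * mlx5_txpp_convert_rx_ts() in the diff below.
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NS_PER_S 1000000000ULL

    static uint64_t
    convert_rx_ts(uint64_t ts)
    {
            return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
    }

    int
    main(void)
    {
            /* Arbitrary sample: 5 seconds and 123 nanoseconds packed
             * in the CQE format.
             */
            uint64_t ts = (5ULL << 32) | 123;

            printf("%" PRIu64 "\n", convert_rx_ts(ts)); /* 5000000123 */
            return 0;
    }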
 drivers/net/mlx5/mlx5_rx.h   | 19 ----------
 drivers/net/mlx5/mlx5_rxtx.h | 19 ++++++++++
 drivers/net/mlx5/mlx5_tx.c   | 29 +++++++++++++++
 drivers/net/mlx5/mlx5_tx.h   | 72 +++++++++++++++++++++++++++++++++++-
 4 files changed, 118 insertions(+), 21 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 3514edd84e..f42607dce4 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -377,25 +377,6 @@ mlx5_rx_mb2mr(struct mlx5_rxq_data *rxq, struct rte_mbuf *mb)
 	return mlx5_mr_mempool2mr_bh(mr_ctrl, mb->pool, addr);
 }
 
-/**
- * Convert timestamp from HW format to linear counter
- * from Packet Pacing Clock Queue CQE timestamp format.
- *
- * @param sh
- *   Pointer to the device shared context. Might be needed
- *   to convert according current device configuration.
- * @param ts
- *   Timestamp from CQE to convert.
- * @return
- *   UTC in nanoseconds
- */
-static __rte_always_inline uint64_t
-mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
-{
-	RTE_SET_USED(sh);
-	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
-}
-
 /**
  * Set timestamp in mbuf dynamic field.
  *
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 876aa14ae6..b109d50758 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -43,4 +43,23 @@ int mlx5_queue_state_modify_primary(struct rte_eth_dev *dev,
 int mlx5_queue_state_modify(struct rte_eth_dev *dev,
 			    struct mlx5_mp_arg_queue_state_modify *sm);
 
+/**
+ * Convert timestamp from HW format to linear counter
+ * from Packet Pacing Clock Queue CQE timestamp format.
+ *
+ * @param sh
+ *   Pointer to the device shared context. Might be needed
+ *   to convert according current device configuration.
+ * @param ts
+ *   Timestamp from CQE to convert.
+ * @return
+ *   UTC in nanoseconds
+ */
+static __rte_always_inline uint64_t
+mlx5_txpp_convert_rx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t ts)
+{
+	RTE_SET_USED(sh);
+	return (ts & UINT32_MAX) + (ts >> 32) * NS_PER_S;
+}
+
 #endif /* RTE_PMD_MLX5_RXTX_H_ */
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index 14e1487e59..13e2d90e03 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -7,6 +7,7 @@
 #include 
 #include 
 
+#include 
 #include 
 #include 
 #include 
@@ -232,6 +233,15 @@ mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
 		MLX5_ASSERT((txq->fcqs[txq->cq_ci & txq->cqe_m] >> 16) ==
 			    cqe->wqe_counter);
 #endif
+		if (__rte_trace_point_fp_is_enabled()) {
+			uint64_t ts = rte_be_to_cpu_64(cqe->timestamp);
+			uint16_t wqe_id = rte_be_to_cpu_16(cqe->wqe_counter);
+
+			if (txq->rt_timestamp)
+				ts = mlx5_txpp_convert_rx_ts(NULL, ts);
+			rte_pmd_mlx5_trace_tx_complete(txq->port_id, txq->idx,
+						       wqe_id, ts);
+		}
 		ring_doorbell = true;
 		++txq->cq_ci;
 		last_cqe = cqe;
@@ -752,3 +762,22 @@ mlx5_tx_burst_mode_get(struct rte_eth_dev *dev,
 	}
 	return -EINVAL;
 }
+
+/* TX burst subroutines trace points. */
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_entry,
+	pmd.net.mlx5.tx.entry)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_exit,
+	pmd.net.mlx5.tx.exit)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_wqe,
+	pmd.net.mlx5.tx.wqe)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_wait,
+	pmd.net.mlx5.tx.wait)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_push,
+	pmd.net.mlx5.tx.push)
+
+RTE_TRACE_POINT_REGISTER(rte_pmd_mlx5_trace_tx_complete,
+	pmd.net.mlx5.tx.complete)
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index cc8f7e98aa..b90cdf1fcc 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -13,12 +13,61 @@
 #include 
 #include 
 #include 
 
+#include 
 #include 
 #include 
 
 #include "mlx5.h"
 #include "mlx5_autoconf.h"
+#include "mlx5_rxtx.h"
+
+/* TX burst subroutines trace points. */
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_entry,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_exit,
+	RTE_TRACE_POINT_ARGS(uint16_t nb_sent, uint16_t nb_req),
+	rte_trace_point_emit_u16(nb_sent);
+	rte_trace_point_emit_u16(nb_req);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_wqe,
+	RTE_TRACE_POINT_ARGS(uint32_t opcode),
+	rte_trace_point_emit_u32(opcode);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_wait,
+	RTE_TRACE_POINT_ARGS(uint64_t ts),
+	rte_trace_point_emit_u64(ts);
+)
+
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_push,
+	RTE_TRACE_POINT_ARGS(const struct rte_mbuf *mbuf, uint16_t wqe_id),
+	rte_trace_point_emit_ptr(mbuf);
+	rte_trace_point_emit_u32(mbuf->pkt_len);
+	rte_trace_point_emit_u16(mbuf->nb_segs);
+	rte_trace_point_emit_u16(wqe_id);
+)
+
+RTE_TRACE_POINT_FP(
+	rte_pmd_mlx5_trace_tx_complete,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id,
+		uint16_t wqe_id, uint64_t ts),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(queue_id);
+	rte_trace_point_emit_u64(ts);
+	rte_trace_point_emit_u16(wqe_id);
+)
 
 /* TX burst subroutines return codes. */
 enum mlx5_txcmp_code {
@@ -764,6 +813,9 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
 	cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
 			     MLX5_COMP_MODE_OFFSET);
 	cs->misc = RTE_BE32(0);
+	if (__rte_trace_point_fp_is_enabled() && !loc->pkts_sent)
+		rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
+	rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
 }
 
 /**
@@ -1692,6 +1744,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 		if (txq->wait_on_time) {
 			/* The wait on time capability should be used. */
 			ts -= sh->txpp.skew;
+			rte_pmd_mlx5_trace_tx_wait(ts);
 			mlx5_tx_cseg_init(txq, loc, wqe,
 					  1 + sizeof(struct mlx5_wqe_wseg) /
 					      MLX5_WSEG_SIZE,
@@ -1706,6 +1759,7 @@ mlx5_tx_schedule_send(struct mlx5_txq_data *restrict txq,
 			if (unlikely(wci < 0))
 				return MLX5_TXCMP_CODE_SINGLE;
 			/* Build the WAIT WQE with specified completion. */
+			rte_pmd_mlx5_trace_tx_wait(ts - sh->txpp.skew);
 			mlx5_tx_cseg_init(txq, loc, wqe,
 					  1 + sizeof(struct mlx5_wqe_qseg) /
 					      MLX5_WSEG_SIZE,
@@ -1810,6 +1864,7 @@ mlx5_tx_packet_multi_tso(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, 0, MLX5_OPCODE_TSO, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	ds = mlx5_tx_mseg_build(txq, loc, wqe, vlan, inlen, 1, olx);
 	wqe->cseg.sq_ds = rte_cpu_to_be_32(txq->qp_num_8s | ds);
 	txq->wqe_ci += (ds + 3) / 4;
@@ -1892,6 +1947,7 @@ mlx5_tx_packet_multi_send(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, ds, MLX5_OPCODE_SEND, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	mlx5_tx_eseg_none(txq, loc, wqe, olx);
 	dseg = &wqe->dseg[0];
 	do {
@@ -2115,6 +2171,7 @@ mlx5_tx_packet_multi_inline(struct mlx5_txq_data *__rte_restrict txq,
 	wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 	loc->wqe_last = wqe;
 	mlx5_tx_cseg_init(txq, loc, wqe, 0, MLX5_OPCODE_SEND, olx);
+	rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 	ds = mlx5_tx_mseg_build(txq, loc, wqe, vlan, inlen, 0, olx);
 	wqe->cseg.sq_ds = rte_cpu_to_be_32(txq->qp_num_8s | ds);
 	txq->wqe_ci += (ds + 3) / 4;
@@ -2318,8 +2375,8 @@ mlx5_tx_burst_tso(struct mlx5_txq_data *__rte_restrict txq,
 		 */
 		wqe = txq->wqes + (txq->wqe_ci & txq->wqe_m);
 		loc->wqe_last = wqe;
-		mlx5_tx_cseg_init(txq, loc, wqe, ds,
-				  MLX5_OPCODE_TSO, olx);
+		mlx5_tx_cseg_init(txq, loc, wqe, ds, MLX5_OPCODE_TSO, olx);
+		rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 		dseg = mlx5_tx_eseg_data(txq, loc, wqe, vlan, hlen, 1, olx);
 		dptr = rte_pktmbuf_mtod(loc->mbuf, uint8_t *) + hlen - vlan;
 		dlen -= hlen - vlan;
@@ -2688,6 +2745,7 @@ mlx5_tx_burst_empw_simple(struct mlx5_txq_data *__rte_restrict txq,
 			/* Update sent data bytes counter. */
 			slen += dlen;
 #endif
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_dseg_ptr
 				(txq, loc, dseg,
 				 rte_pktmbuf_mtod(loc->mbuf, uint8_t *),
@@ -2926,6 +2984,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 					tlen += sizeof(struct rte_vlan_hdr);
 				if (room < tlen)
 					break;
+				rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 				dseg = mlx5_tx_dseg_vlan(txq, loc, dseg,
 							 dptr, dlen, olx);
 #ifdef MLX5_PMD_SOFT_COUNTERS
@@ -2935,6 +2994,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			} else {
 				if (room < tlen)
 					break;
+				rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 				dseg = mlx5_tx_dseg_empw(txq, loc, dseg,
 							 dptr, dlen, olx);
 			}
@@ -2980,6 +3040,7 @@ mlx5_tx_burst_empw_inline(struct mlx5_txq_data *__rte_restrict txq,
 			if (MLX5_TXOFF_CONFIG(VLAN))
 				MLX5_ASSERT(!(loc->mbuf->ol_flags &
 					      RTE_MBUF_F_TX_VLAN));
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_dseg_ptr(txq, loc, dseg, dptr, dlen, olx);
 			/* We have to store mbuf in elts.*/
 			txq->elts[txq->elts_head++ & txq->elts_m] = loc->mbuf;
@@ -3194,6 +3255,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, seg_n,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_data(txq, loc, wqe,
 					  vlan, inlen, 0, olx);
 			txq->wqe_ci += wqe_n;
@@ -3256,6 +3318,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, ds,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			dseg = mlx5_tx_eseg_data(txq, loc, wqe, vlan,
 						 txq->inlen_mode,
 						 0, olx);
@@ -3297,6 +3360,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, 4,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_dmin(txq, loc, wqe, vlan, olx);
 			dptr = rte_pktmbuf_mtod(loc->mbuf, uint8_t *) +
 			       MLX5_ESEG_MIN_INLINE_SIZE - vlan;
@@ -3338,6 +3402,7 @@ mlx5_tx_burst_single_send(struct mlx5_txq_data *__rte_restrict txq,
 			loc->wqe_last = wqe;
 			mlx5_tx_cseg_init(txq, loc, wqe, 3,
 					  MLX5_OPCODE_SEND, olx);
+			rte_pmd_mlx5_trace_tx_push(loc->mbuf, txq->wqe_ci);
 			mlx5_tx_eseg_none(txq, loc, wqe, olx);
 			mlx5_tx_dseg_ptr
 				(txq, loc, &wqe->dseg[0],
@@ -3707,6 +3772,9 @@ mlx5_tx_burst_tmpl(struct mlx5_txq_data *__rte_restrict txq,
 #endif
 	if (MLX5_TXOFF_CONFIG(INLINE) && loc.mbuf_free)
 		__mlx5_tx_free_mbuf(txq, pkts, loc.mbuf_free, olx);
+	/* Trace productive bursts only. */
+	if (__rte_trace_point_fp_is_enabled() && loc.pkts_sent)
+		rte_pmd_mlx5_trace_tx_exit(loc.pkts_sent, pkts_n);
 	return loc.pkts_sent;
 }
 
-- 
2.18.1