Date: Fri, 24 Feb 2023 09:51:20 -0800
From: Stephen Hemminger
To: Ferruh Yigit
Cc: longli@linuxonhyperv.com, Thomas Monjalon, Andrew Rybchenko,
 Jerin Jacob Kollanukkaran, David Marchand, dev@dpdk.org, Ajay Sharma,
 Long Li, stable@dpdk.org, Luca Boccassi, Qi Z Zhang, Ajit Khaparde,
 Bruce Richardson, Konstantin Ananyev, Olivier Matz, Honnappa Nagarahalli
Subject: Re: [PATCH] net/mana: use RTE_LOG_DP for logs on datapath
Message-ID: <20230224095120.32c25481@hermes.local>
In-Reply-To: <20230223100917.282f191c@hermes.local>
References: <1677012145-3559-1-git-send-email-longli@linuxonhyperv.com>
 <20230223100917.282f191c@hermes.local>

On Thu, 23 Feb 2023 10:09:17 -0800
Stephen Hemminger wrote:

> On Thu, 23 Feb 2023 14:07:25 +0000
> Ferruh Yigit wrote:
>
> > Overall I am not sure if anyone is interested in driver datapath logs
> > other than driver developers themselves.
> >
> > For datapath logging I think there are two concerns:
> > 1) It should not eat *any* cycles unless explicitly enabled.
> > 2) There must be a way to enable/disable it, because of the massive
> >    amount of log output it can generate.
> >
> > Currently there are two existing approaches for driver datapath logging:
> > i)  Controlled by the 'RTE_ETHDEV_DEBUG_RX/TX' compile-time flags;
> >     when enabled, 'rte_log()' is used with an Rx/Tx-specific log type.
> > ii) 'RTE_LOG_DP()', with compile-time control per logtype via
> >     'RTE_LOG_DP_LEVEL'; when enabled, 'rte_log()' is used with the PMD
> >     logtype.
> >
> > With (ii), you need to re-compile the code whenever you want to increase
> > the log verbosity, and it leaks into production code as mentioned above.
> >
> > With (i), the developer compiles once with debug enabled and can then
> > adjust the log level dynamically. This is a more DPDK-developer-focused
> > approach.
> >
> > [1]
> > Given the above, what do you think about retiring 'RTE_LOG_DP' (at least
> > within the ethdev datapath) and choosing (i) as the preferred datapath
> > logging mechanism?
>
> I agree, the current tx/rx logging is a mess.
> Each driver is different, each driver has to have something to enable it,
> and it really isn't useful beyond the driver developer.
>
> Using tracing seems like a much better option. Could we agree on a common
> set of trace points for drivers and fix all drivers to use the same thing?
> It will probably cause some upset among driver developers:
> "where did my nice printf's go, now I have to learn tracing"
> but DPDK has a good facility here, let's use it.
>
> My proposal would be:
> - agree on a common set of trace points
> - apply it to all drivers
> - remove RTE_LOG_DP()
> - remove the per-driver RX/TX options
> - side effect: more uses of RTE_LOGTYPE_PMD go away.

Here is an example of using tracepoints instead; it is compile tested only,
for illustration.

Note: with tracepoints it is possible to keep some of the trace points even
when fast-path tracing is not enabled. Events such as running out of Tx
descriptors or mbufs are not perf critical, but would be good for the
application to see.
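For readers less familiar with the two mechanisms being compared above, here
is a minimal sketch of what a datapath log looks like under each approach.
The PMD_RX_LOG shape mirrors the ixgbe macros removed in the patch below; the
logtype variable and the wrapper function are made up for illustration.

#include <stdint.h>
#include <rte_log.h>

/* Approach (i): the log only exists when the driver is built with
 * RTE_ETHDEV_DEBUG_RX; once built in, verbosity is adjustable at runtime
 * through the registered log type.
 */
#ifdef RTE_ETHDEV_DEBUG_RX
extern int example_logtype_rx;	/* hypothetical, registered elsewhere */
#define PMD_RX_LOG(level, fmt, args...) \
	rte_log(RTE_LOG_ ## level, example_logtype_rx, \
		"%s(): " fmt "\n", __func__, ## args)
#else
#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
#endif

static inline void
example_rx_alloc_failed(uint16_t port_id, uint16_t queue_id)
{
	/* Approach (i): compiled out entirely without RTE_ETHDEV_DEBUG_RX */
	PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
		   port_id, queue_id);

	/* Approach (ii): always present in the source, but compiles to
	 * nothing unless DEBUG <= RTE_LOG_DP_LEVEL (RTE_LOG_INFO by
	 * default), so raising verbosity means rebuilding with a higher
	 * RTE_LOG_DP_LEVEL.
	 */
	RTE_LOG_DP(DEBUG, PMD, "RX mbuf alloc failed port_id=%u queue_id=%u\n",
		   port_id, queue_id);
}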
---
 drivers/net/ixgbe/ixgbe_ethdev.c |   7 --
 drivers/net/ixgbe/ixgbe_logs.h   |  18 -----
 drivers/net/ixgbe/ixgbe_rxtx.c   |  95 +++++++------------------
 drivers/net/ixgbe/ixgbe_trace.h  | 110 +++++++++++++++++++++++++++++++
 4 files changed, 133 insertions(+), 97 deletions(-)
 create mode 100644 drivers/net/ixgbe/ixgbe_trace.h

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 88118bc30560..07f8ccdd97cb 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -8317,10 +8317,3 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ixgbe_vf,
 
 RTE_LOG_REGISTER_SUFFIX(ixgbe_logtype_init, init, NOTICE);
 RTE_LOG_REGISTER_SUFFIX(ixgbe_logtype_driver, driver, NOTICE);
-
-#ifdef RTE_ETHDEV_DEBUG_RX
-RTE_LOG_REGISTER_SUFFIX(ixgbe_logtype_rx, rx, DEBUG);
-#endif
-#ifdef RTE_ETHDEV_DEBUG_TX
-RTE_LOG_REGISTER_SUFFIX(ixgbe_logtype_tx, tx, DEBUG);
-#endif
diff --git a/drivers/net/ixgbe/ixgbe_logs.h b/drivers/net/ixgbe/ixgbe_logs.h
index 00ef797ca103..f70fa1fd4afd 100644
--- a/drivers/net/ixgbe/ixgbe_logs.h
+++ b/drivers/net/ixgbe/ixgbe_logs.h
@@ -12,24 +12,6 @@ extern int ixgbe_logtype_init;
 
 #define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
 
-#ifdef RTE_ETHDEV_DEBUG_RX
-extern int ixgbe_logtype_rx;
-#define PMD_RX_LOG(level, fmt, args...) \
-	rte_log(RTE_LOG_ ## level, ixgbe_logtype_rx, \
-		"%s(): " fmt "\n", __func__, ## args)
-#else
-#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
-#ifdef RTE_ETHDEV_DEBUG_TX
-extern int ixgbe_logtype_tx;
-#define PMD_TX_LOG(level, fmt, args...) \
-	rte_log(RTE_LOG_ ## level, ixgbe_logtype_tx, \
-		"%s(): " fmt "\n", __func__, ## args)
-#else
-#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
-#endif
-
 extern int ixgbe_logtype_driver;
 #define PMD_DRV_LOG_RAW(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, ixgbe_logtype_driver, "%s(): " fmt, \
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c9d6ca9efea4..f9f0ffebee8e 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -46,6 +46,7 @@
 #include <rte_vect.h>
 
 #include "ixgbe_logs.h"
+#include "ixgbe_trace.h"
 #include "base/ixgbe_api.h"
 #include "base/ixgbe_vf.h"
 #include "ixgbe_ethdev.h"
@@ -581,11 +582,8 @@ ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
 	desc_to_clean_to = sw_ring[desc_to_clean_to].last_id;
 	status = txr[desc_to_clean_to].wb.status;
 	if (!(status & rte_cpu_to_le_32(IXGBE_TXD_STAT_DD))) {
-		PMD_TX_LOG(DEBUG,
-			   "TX descriptor %4u is not done"
-			   "(port=%d queue=%d)",
-			   desc_to_clean_to,
-			   txq->port_id, txq->queue_id);
+		ixgbe_trace_tx_not_done(txq->port_id, txq->queue_id, desc_to_clean_to);
+
 		/* Failed to clean any descriptors, better luck next time */
 		return -(1);
 	}
@@ -598,11 +596,8 @@ ixgbe_xmit_cleanup(struct ixgbe_tx_queue *txq)
 		nb_tx_to_clean = (uint16_t)(desc_to_clean_to -
 						last_desc_cleaned);
 
-	PMD_TX_LOG(DEBUG,
-		   "Cleaning %4u TX descriptors: %4u to %4u "
-		   "(port=%d queue=%d)",
-		   nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to,
-		   txq->port_id, txq->queue_id);
+	ixgbe_trace_tx_done(txq->port_id, txq->queue_id,
+			    nb_tx_to_clean, last_desc_cleaned, desc_to_clean_to);
 
 	/*
 	 * The last descriptor to clean is done, so that means all the
@@ -734,26 +729,14 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		if (tx_last >= txq->nb_tx_desc)
 			tx_last = (uint16_t) (tx_last - txq->nb_tx_desc);
 
-		PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u pktlen=%u"
-			   " tx_first=%u tx_last=%u",
-			   (unsigned) txq->port_id,
-			   (unsigned) txq->queue_id,
-			   (unsigned) pkt_len,
-			   (unsigned) tx_id,
-			   (unsigned) tx_last);
-
+		ixgbe_trace_tx(txq->port_id, txq->queue_id, pkt_len, tx_id, tx_last);
 		/*
 		 * Make sure there are enough TX descriptors available to
 		 * transmit the entire packet.
 		 * nb_used better be less than or equal to txq->tx_rs_thresh
 		 */
 		if (nb_used > txq->nb_tx_free) {
-			PMD_TX_LOG(DEBUG,
-				   "Not enough free TX descriptors "
-				   "nb_used=%4u nb_free=%4u "
-				   "(port=%d queue=%d)",
-				   nb_used, txq->nb_tx_free,
-				   txq->port_id, txq->queue_id);
+			ixgbe_trace_tx_full(txq->port_id, txq->queue_id, nb_used, txq->nb_tx_free);
 
 			if (ixgbe_xmit_cleanup(txq) != 0) {
 				/* Could not clean any descriptors */
@@ -764,17 +747,8 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 			/* nb_used better be <= txq->tx_rs_thresh */
 			if (unlikely(nb_used > txq->tx_rs_thresh)) {
-				PMD_TX_LOG(DEBUG,
-					   "The number of descriptors needed to "
-					   "transmit the packet exceeds the "
-					   "RS bit threshold. This will impact "
-					   "performance."
-					   "nb_used=%4u nb_free=%4u "
-					   "tx_rs_thresh=%4u. "
-					   "(port=%d queue=%d)",
-					   nb_used, txq->nb_tx_free,
-					   txq->tx_rs_thresh,
-					   txq->port_id, txq->queue_id);
+				ixgbe_trace_tx_thresh(txq->port_id, txq->queue_id,
+						      nb_used, txq->nb_tx_free, txq->tx_rs_thresh);
 				/*
 				 * Loop here until there are enough TX
 				 * descriptors or until the ring cannot be
@@ -918,10 +892,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 
 		/* Set RS bit only on threshold packets' last descriptor */
 		if (txq->nb_tx_used >= txq->tx_rs_thresh) {
-			PMD_TX_LOG(DEBUG,
-				   "Setting RS bit on TXD id="
-				   "%4u (port=%d queue=%d)",
-				   tx_last, txq->port_id, txq->queue_id);
+			ixgbe_trace_tx_rs(txq->port_id, txq->queue_id, tx_last);
 
 			cmd_type_len |= IXGBE_TXD_CMD_RS;
 
@@ -944,9 +915,7 @@ ixgbe_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	/*
 	 * Set the Transmit Descriptor Tail (TDT)
 	 */
-	PMD_TX_LOG(DEBUG, "port_id=%u queue_id=%u tx_tail=%u nb_tx=%u",
-		   (unsigned) txq->port_id, (unsigned) txq->queue_id,
-		   (unsigned) tx_id, (unsigned) nb_tx);
+	ixgbe_trace_tx_tail(txq->port_id, txq->queue_id, tx_id, nb_tx);
 	IXGBE_PCI_REG_WC_WRITE_RELAXED(txq->tdt_reg_addr, tx_id);
 	txq->tx_tail = tx_id;
 
@@ -1717,9 +1686,7 @@ rx_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		if (ixgbe_rx_alloc_bufs(rxq, true) != 0) {
 			int i, j;
 
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", (unsigned) rxq->port_id,
-				   (unsigned) rxq->queue_id);
+			ixgbe_trace_rx_mbuf_alloc_failed(rxq->port_id, rxq->queue_id);
 
 			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed +=
 				rxq->rx_free_thresh;
@@ -1851,17 +1818,12 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		 * to happen by sending specific "back-pressure" flow control
 		 * frames to its peer(s).
 		 */
-		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
-			   "ext_err_stat=0x%08x pkt_len=%u",
-			   (unsigned) rxq->port_id, (unsigned) rxq->queue_id,
-			   (unsigned) rx_id, (unsigned) staterr,
-			   (unsigned) rte_le_to_cpu_16(rxd.wb.upper.length));
+		ixgbe_trace_rx(rxq->port_id, rxq->queue_id, rx_id,
+			       staterr, rte_le_to_cpu_16(rxd.wb.upper.length));
 
 		nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
 		if (nmb == NULL) {
-			PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u "
-				   "queue_id=%u", (unsigned) rxq->port_id,
-				   (unsigned) rxq->queue_id);
+			ixgbe_trace_rx_mbuf_alloc_failed(rxq->port_id, rxq->queue_id);
 			rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
 			break;
 		}
@@ -1959,11 +1921,9 @@ ixgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	 */
 	nb_hold = (uint16_t) (nb_hold + rxq->nb_rx_hold);
 	if (nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
-			   "nb_hold=%u nb_rx=%u",
-			   (unsigned) rxq->port_id, (unsigned) rxq->queue_id,
-			   (unsigned) rx_id, (unsigned) nb_hold,
-			   (unsigned) nb_rx);
+		ixgbe_trace_rx_post(rxq->port_id, rxq->queue_id,
+				    rx_id, nb_hold, nb_rx);
+
 		rx_id = (uint16_t) ((rx_id == 0) ?
 			     (rxq->nb_rx_desc - 1) : (rx_id - 1));
 		IXGBE_PCI_REG_WC_WRITE(rxq->rdt_reg_addr, rx_id);
@@ -2124,17 +2084,13 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 
 		rxd = *rxdp;
 
-		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_id=%u "
-			   "staterr=0x%x data_len=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, staterr,
-			   rte_le_to_cpu_16(rxd.wb.upper.length));
+		ixgbe_trace_rx(rxq->port_id, rxq->queue_id, rx_id, staterr,
+			       rte_le_to_cpu_16(rxd.wb.upper.length));
 
 		if (!bulk_alloc) {
 			nmb = rte_mbuf_raw_alloc(rxq->mb_pool);
 			if (nmb == NULL) {
-				PMD_RX_LOG(DEBUG, "RX mbuf alloc failed "
-					   "port_id=%u queue_id=%u",
-					   rxq->port_id, rxq->queue_id);
+				ixgbe_trace_rx_mbuf_alloc_failed(rxq->port_id, rxq->queue_id);
 
 				rte_eth_devices[rxq->port_id].data->
 							rx_mbuf_alloc_failed++;
@@ -2150,10 +2106,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 							    next_rdt);
 				nb_hold -= rxq->rx_free_thresh;
 			} else {
-				PMD_RX_LOG(DEBUG, "RX bulk alloc failed "
-					   "port_id=%u queue_id=%u",
-					   rxq->port_id, rxq->queue_id);
-
+				ixgbe_trace_rx_mbuf_alloc_failed(rxq->port_id, rxq->queue_id);
 				rte_eth_devices[rxq->port_id].data->
 							rx_mbuf_alloc_failed++;
 				break;
@@ -2307,9 +2260,7 @@ ixgbe_recv_pkts_lro(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts,
 	 * hardware point of view...
 	 */
 	if (!bulk_alloc && nb_hold > rxq->rx_free_thresh) {
-		PMD_RX_LOG(DEBUG, "port_id=%u queue_id=%u rx_tail=%u "
-			   "nb_hold=%u nb_rx=%u",
-			   rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
+		ixgbe_trace_rx_post(rxq->port_id, rxq->queue_id, rx_id, nb_hold, nb_rx);
 		rte_wmb();
 		IXGBE_PCI_REG_WC_WRITE_RELAXED(rxq->rdt_reg_addr, prev_id);
 
diff --git a/drivers/net/ixgbe/ixgbe_trace.h b/drivers/net/ixgbe/ixgbe_trace.h
new file mode 100644
index 000000000000..f260edd6fe3a
--- /dev/null
+++ b/drivers/net/ixgbe/ixgbe_trace.h
@@ -0,0 +1,110 @@
+/* SPDX-License-Identifier: BSD-3-Clause */
+
+#ifndef _IXGBE_TRACE_H_
+#define _IXGBE_TRACE_H_
+
+#include <rte_trace_point.h>
+
+RTE_TRACE_POINT(
+	ixgbe_trace_rx_mbuf_alloc_failed,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_rx,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+			     uint16_t rx_id, uint32_t staterr, uint16_t len),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_u16(rx_id);
+	rte_trace_point_emit_u32(staterr);
+	rte_trace_point_emit_u16(len);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_rx_post,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t rx_queue_id,
+			     uint16_t rx_id, uint16_t nb_hold, uint16_t nb_rx),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(rx_queue_id);
+	rte_trace_point_emit_u16(rx_id);
+	rte_trace_point_emit_u16(nb_hold);
+	rte_trace_point_emit_u16(nb_rx);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_tx_done,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t nb_to_clean, uint16_t last_desc_cleaned,
+			     uint16_t desc_to_clean),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(nb_to_clean);
+	rte_trace_point_emit_u16(last_desc_cleaned);
+	rte_trace_point_emit_u16(desc_to_clean);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_tx_not_done,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t desc_to_clean),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(desc_to_clean);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_tx,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t pktlen, uint16_t tx_first, uint16_t tx_last),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(pktlen);
+	rte_trace_point_emit_u16(tx_first);
+	rte_trace_point_emit_u16(tx_last);
+);
+
+RTE_TRACE_POINT(
+	ixgbe_trace_tx_full,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t nb_used, uint16_t nb_free),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(nb_used);
+	rte_trace_point_emit_u16(nb_free);
+);
+
+RTE_TRACE_POINT(
+	ixgbe_trace_tx_thresh,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t nb_used, uint16_t nb_free,
+			     uint16_t tx_rs_thresh),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(nb_used);
+	rte_trace_point_emit_u16(nb_free);
+	rte_trace_point_emit_u16(tx_rs_thresh);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_tx_rs,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t txd_id),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(txd_id);
+);
+
+RTE_TRACE_POINT_FP(
+	ixgbe_trace_tx_tail,
+	RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t tx_queue_id,
+			     uint16_t tx_tail, uint16_t nb_tx),
+	rte_trace_point_emit_u16(port_id);
+	rte_trace_point_emit_u16(tx_queue_id);
+	rte_trace_point_emit_u16(tx_tail);
+	rte_trace_point_emit_u16(nb_tx);
+);
+
+#endif /* _IXGBE_TRACE_H_ */
-- 
2.39.1
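A note on wiring this up: RTE_TRACE_POINT()/RTE_TRACE_POINT_FP() only declare
the tracepoints; they still have to be registered once from a C file before
an application can enable them. A minimal sketch of what that could look like
(the file name and the "pmd.net.ixgbe.*" trace names are made up for
illustration and are not part of the patch above):

/* hypothetical drivers/net/ixgbe/ixgbe_trace_points.c */
#include <rte_trace_point_register.h>

#include "ixgbe_trace.h"

RTE_TRACE_POINT_REGISTER(ixgbe_trace_rx_mbuf_alloc_failed,
	pmd.net.ixgbe.rx_mbuf_alloc_failed)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_rx, pmd.net.ixgbe.rx)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_rx_post, pmd.net.ixgbe.rx_post)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_done, pmd.net.ixgbe.tx_done)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_not_done, pmd.net.ixgbe.tx_not_done)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx, pmd.net.ixgbe.tx)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_full, pmd.net.ixgbe.tx_full)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_thresh, pmd.net.ixgbe.tx_thresh)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_rs, pmd.net.ixgbe.tx_rs)
RTE_TRACE_POINT_REGISTER(ixgbe_trace_tx_tail, pmd.net.ixgbe.tx_tail)

With something like that in place, the points can be enabled at runtime with
the EAL --trace option (e.g. --trace=pmd.net.ixgbe.*) and collected with the
existing trace tooling. The _FP variants are still compiled out unless the
build enables fast-path tracing (the enable_trace_fp meson option), which is
what lets the non-FP points such as ixgbe_trace_tx_full remain available in
default builds, as noted above.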